Test Report: QEMU_macOS 19640

e5b440675da001c9bcd97e7df406aef1ef05cbc8:2024-09-13:36202

Failed tests (99/274)

Order  Failed Test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 16.88
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.25
22 TestOffline 10.13
33 TestAddons/parallel/Registry 71.28
46 TestCertOptions 10.28
47 TestCertExpiration 195.39
48 TestDockerFlags 10.28
49 TestForceSystemdFlag 10.11
50 TestForceSystemdEnv 11.61
95 TestFunctional/parallel/ServiceCmdConnect 35.59
167 TestMultiControlPlane/serial/StopSecondaryNode 214.11
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 104.27
169 TestMultiControlPlane/serial/RestartSecondaryNode 209.02
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 234.38
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
174 TestMultiControlPlane/serial/StopCluster 202.09
175 TestMultiControlPlane/serial/RestartCluster 5.26
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.07
181 TestImageBuild/serial/Setup 10.18
184 TestJSONOutput/start/Command 9.79
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.05
213 TestMinikubeProfile 10.34
216 TestMountStart/serial/StartWithMountFirst 10.06
219 TestMultiNode/serial/FreshStart2Nodes 10.01
220 TestMultiNode/serial/DeployApp2Nodes 78.97
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.07
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.08
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.14
227 TestMultiNode/serial/StartAfterStop 54
228 TestMultiNode/serial/RestartKeepsNodes 8.52
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 2.02
231 TestMultiNode/serial/RestartMultiNode 5.26
232 TestMultiNode/serial/ValidateNameConflict 20.23
236 TestPreload 10.04
238 TestScheduledStopUnix 9.99
239 TestSkaffold 13.53
242 TestRunningBinaryUpgrade 587.33
244 TestKubernetesUpgrade 19
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.23
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.71
260 TestStoppedBinaryUpgrade/Upgrade 573.3
262 TestPause/serial/Start 10.12
272 TestNoKubernetes/serial/StartWithK8s 9.89
273 TestNoKubernetes/serial/StartWithStopK8s 5.3
274 TestNoKubernetes/serial/Start 5.29
278 TestNoKubernetes/serial/StartNoArgs 5.32
280 TestNetworkPlugins/group/auto/Start 9.81
281 TestNetworkPlugins/group/kindnet/Start 9.83
282 TestNetworkPlugins/group/calico/Start 10.01
283 TestNetworkPlugins/group/custom-flannel/Start 9.92
284 TestNetworkPlugins/group/false/Start 9.93
285 TestNetworkPlugins/group/enable-default-cni/Start 9.97
286 TestNetworkPlugins/group/flannel/Start 9.75
287 TestNetworkPlugins/group/bridge/Start 9.78
288 TestNetworkPlugins/group/kubenet/Start 9.85
290 TestStartStop/group/old-k8s-version/serial/FirstStart 10.11
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/old-k8s-version/serial/Pause 0.1
302 TestStartStop/group/no-preload/serial/FirstStart 9.89
303 TestStartStop/group/no-preload/serial/DeployApp 0.09
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
307 TestStartStop/group/no-preload/serial/SecondStart 5.25
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
312 TestStartStop/group/embed-certs/serial/FirstStart 10.02
313 TestStartStop/group/no-preload/serial/Pause 0.12
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 12.02
316 TestStartStop/group/embed-certs/serial/DeployApp 0.1
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.14
319 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
323 TestStartStop/group/embed-certs/serial/SecondStart 5.25
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 7.35
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
328 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
329 TestStartStop/group/embed-certs/serial/Pause 0.1
331 TestStartStop/group/newest-cni/serial/FirstStart 10.06
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
340 TestStartStop/group/newest-cni/serial/SecondStart 5.25
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.20.0/json-events (16.88s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-882000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-882000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (16.875108417s)

-- stdout --
	{"specversion":"1.0","id":"45c129ba-336f-4996-9932-33c6db61442c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-882000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d2c4b08c-9b1a-4329-a6cd-57db1db10e87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19640"}}
	{"specversion":"1.0","id":"a4f28a30-8965-4026-ac30-2c02b3b1fea0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig"}}
	{"specversion":"1.0","id":"8ba40f7e-4163-471b-99f4-d5321a9e94cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"42ed3d9a-f9d3-4a13-80b8-f2602b7cef9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"39aec215-37c5-4018-addb-50baeaee5def","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube"}}
	{"specversion":"1.0","id":"e6397696-09a1-43ec-82e9-9bbbb866ff1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"064bab75-9305-4f25-9923-bedd3081d046","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f632972e-c3d4-4eb3-bf76-1fc7872f9667","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"2522d2bc-d9f1-4321-8e78-c3910aca59c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"10296b55-e609-47fc-9af3-c1d65d5b1def","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-882000\" primary control-plane node in \"download-only-882000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"55eb655a-a55f-444d-9ae4-08f4a56cd435","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cec3acfc-512f-40ec-a3f5-f55289b7233f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106939780 0x106939780 0x106939780 0x106939780 0x106939780 0x106939780 0x106939780] Decompressors:map[bz2:0x140003bd540 gz:0x140003bd548 tar:0x140003bd4f0 tar.bz2:0x140003bd500 tar.gz:0x140003bd510 tar.xz:0x140003bd520 tar.zst:0x140003bd530 tbz2:0x140003bd500 tgz:0x14
0003bd510 txz:0x140003bd520 tzst:0x140003bd530 xz:0x140003bd550 zip:0x140003bd560 zst:0x140003bd558] Getters:map[file:0x14001462550 http:0x14000828190 https:0x140008281e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"32337dff-1776-42e3-b76b-59a8db613d75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0913 16:25:32.583864    1884 out.go:345] Setting OutFile to fd 1 ...
	I0913 16:25:32.584007    1884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:25:32.584010    1884 out.go:358] Setting ErrFile to fd 2...
	I0913 16:25:32.584013    1884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:25:32.584161    1884 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	W0913 16:25:32.584254    1884 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19640-1360/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19640-1360/.minikube/config/config.json: no such file or directory
	I0913 16:25:32.585532    1884 out.go:352] Setting JSON to true
	I0913 16:25:32.602884    1884 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1496,"bootTime":1726268436,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 16:25:32.602956    1884 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 16:25:32.607382    1884 out.go:97] [download-only-882000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 16:25:32.607511    1884 notify.go:220] Checking for updates...
	W0913 16:25:32.607530    1884 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 16:25:32.610200    1884 out.go:169] MINIKUBE_LOCATION=19640
	I0913 16:25:32.613303    1884 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 16:25:32.617371    1884 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 16:25:32.620305    1884 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 16:25:32.623313    1884 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	W0913 16:25:32.627806    1884 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 16:25:32.628018    1884 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 16:25:32.633229    1884 out.go:97] Using the qemu2 driver based on user configuration
	I0913 16:25:32.633247    1884 start.go:297] selected driver: qemu2
	I0913 16:25:32.633260    1884 start.go:901] validating driver "qemu2" against <nil>
	I0913 16:25:32.633326    1884 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 16:25:32.636308    1884 out.go:169] Automatically selected the socket_vmnet network
	I0913 16:25:32.641961    1884 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0913 16:25:32.642046    1884 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 16:25:32.642094    1884 cni.go:84] Creating CNI manager for ""
	I0913 16:25:32.642128    1884 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0913 16:25:32.642175    1884 start.go:340] cluster config:
	{Name:download-only-882000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-882000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 16:25:32.647251    1884 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 16:25:32.650360    1884 out.go:97] Downloading VM boot image ...
	I0913 16:25:32.650377    1884 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso
	I0913 16:25:40.753408    1884 out.go:97] Starting "download-only-882000" primary control-plane node in "download-only-882000" cluster
	I0913 16:25:40.753429    1884 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 16:25:40.818284    1884 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0913 16:25:40.818300    1884 cache.go:56] Caching tarball of preloaded images
	I0913 16:25:40.818499    1884 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 16:25:40.822674    1884 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0913 16:25:40.822681    1884 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0913 16:25:40.900486    1884 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0913 16:25:48.206215    1884 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0913 16:25:48.206392    1884 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0913 16:25:48.903997    1884 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0913 16:25:48.904198    1884 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/download-only-882000/config.json ...
	I0913 16:25:48.904215    1884 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/download-only-882000/config.json: {Name:mk58a2c4a4c645f58b2f0c31f52a004fa38a922f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 16:25:48.904447    1884 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 16:25:48.904638    1884 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0913 16:25:49.384408    1884 out.go:193] 
	W0913 16:25:49.389405    1884 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106939780 0x106939780 0x106939780 0x106939780 0x106939780 0x106939780 0x106939780] Decompressors:map[bz2:0x140003bd540 gz:0x140003bd548 tar:0x140003bd4f0 tar.bz2:0x140003bd500 tar.gz:0x140003bd510 tar.xz:0x140003bd520 tar.zst:0x140003bd530 tbz2:0x140003bd500 tgz:0x140003bd510 txz:0x140003bd520 tzst:0x140003bd530 xz:0x140003bd550 zip:0x140003bd560 zst:0x140003bd558] Getters:map[file:0x14001462550 http:0x14000828190 https:0x140008281e0] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0913 16:25:49.389431    1884 out_reason.go:110] 
	W0913 16:25:49.396189    1884 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 16:25:49.400467    1884 out.go:193] 

                                                
                                                
** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-882000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (16.88s)
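
The 404 above is the root cause: dl.k8s.io serves no darwin/arm64 kubectl for v1.20.0 (upstream only began publishing darwin/arm64 binaries in later releases), so the checksum file the getter requests does not exist. A minimal diagnostic sketch, not part of the test suite, that reproduces the bad response code (URL copied from the log):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Checksum URL copied verbatim from the failure message above.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	resp.Body.Close()
	// Expect 404: there is no darwin/arm64 kubectl build for v1.20.0.
	fmt.Println(resp.StatusCode, url)
}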

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
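
This failure follows directly from the download failure above: the cached binary was never written, so the stat check fails. A minimal sketch of that existence check, with the path copied from the failure message (not the test's actual code):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Cache path copied from the failure message; it only exists on this CI host.
	p := "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
	if _, err := os.Stat(p); err != nil {
		// Prints "... no such file or directory", matching the test output.
		fmt.Println(err)
	}
}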

TestBinaryMirror (0.25s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-235000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-235000 --alsologtostderr --binary-mirror http://127.0.0.1:49312 --driver=qemu2 : exit status 40 (151.969041ms)

-- stdout --
	* [binary-mirror-235000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-235000" primary control-plane node in "binary-mirror-235000" cluster
	
	

-- /stdout --
** stderr ** 
	I0913 16:25:59.506809    1947 out.go:345] Setting OutFile to fd 1 ...
	I0913 16:25:59.506934    1947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:25:59.506938    1947 out.go:358] Setting ErrFile to fd 2...
	I0913 16:25:59.506940    1947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:25:59.507086    1947 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 16:25:59.508214    1947 out.go:352] Setting JSON to false
	I0913 16:25:59.524390    1947 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1523,"bootTime":1726268436,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 16:25:59.524470    1947 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 16:25:59.529394    1947 out.go:177] * [binary-mirror-235000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 16:25:59.538267    1947 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 16:25:59.538315    1947 notify.go:220] Checking for updates...
	I0913 16:25:59.544322    1947 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 16:25:59.547291    1947 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 16:25:59.548801    1947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 16:25:59.552332    1947 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 16:25:59.555467    1947 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 16:25:59.559143    1947 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 16:25:59.566310    1947 start.go:297] selected driver: qemu2
	I0913 16:25:59.566317    1947 start.go:901] validating driver "qemu2" against <nil>
	I0913 16:25:59.566355    1947 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 16:25:59.569303    1947 out.go:177] * Automatically selected the socket_vmnet network
	I0913 16:25:59.574523    1947 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0913 16:25:59.574635    1947 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 16:25:59.574656    1947 cni.go:84] Creating CNI manager for ""
	I0913 16:25:59.574679    1947 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 16:25:59.574686    1947 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 16:25:59.574738    1947 start.go:340] cluster config:
	{Name:binary-mirror-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-235000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:49312 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_
vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 16:25:59.578557    1947 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 16:25:59.585273    1947 out.go:177] * Starting "binary-mirror-235000" primary control-plane node in "binary-mirror-235000" cluster
	I0913 16:25:59.589273    1947 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 16:25:59.589286    1947 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 16:25:59.589294    1947 cache.go:56] Caching tarball of preloaded images
	I0913 16:25:59.589354    1947 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 16:25:59.589359    1947 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 16:25:59.589539    1947 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/binary-mirror-235000/config.json ...
	I0913 16:25:59.589551    1947 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/binary-mirror-235000/config.json: {Name:mkddcac284a0f5b26d22b9d9f8ab15005a782200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 16:25:59.589944    1947 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 16:25:59.589992    1947 download.go:107] Downloading: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I0913 16:25:59.607449    1947 out.go:201] 
	W0913 16:25:59.611304    1947 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106ad1780 0x106ad1780 0x106ad1780 0x106ad1780 0x106ad1780 0x106ad1780 0x106ad1780] Decompressors:map[bz2:0x140001839c0 gz:0x140001839c8 tar:0x14000183970 tar.bz2:0x14000183980 tar.gz:0x14000183990 tar.xz:0x140001839a0 tar.zst:0x140001839b0 tbz2:0x14000183980 tgz:0x14000183990 txz:0x140001839a0 tzst:0x140001839b0 xz:0x140001839d0 zip:0x140001839e0 zst:0x140001839d8] Getters:map[file:0x14000184fc0 http:0x14000caefa0 https:0x14000caeff0] Dir:
false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49312/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106ad1780 0x106ad1780 0x106ad1780 0x106ad1780 0x106ad1780 0x106ad1780 0x106ad1780] Decompressors:map[bz2:0x140001839c0 gz:0x140001839c8 tar:0x14000183970 tar.bz2:0x14000183980 tar.gz:0x14000183990 tar.xz:0x140001839a0 tar.zst:0x140001839b0 tbz2:0x14000183980 tgz:0x14000183990 txz:0x140001839a0 tzst:0x140001839b0 xz:0x140001839d0 zip:0x140001839e0 zst:0x140001839d8] Getters:map[file:0x14000184fc0 http:0x14000caefa0 https:0x14000caeff0] Dir:false ProgressListener:<nil> Insecure:fals
e DisableSymlinks:false Options:[]}: unexpected EOF
	W0913 16:25:59.611313    1947 out.go:270] * 
	* 
	W0913 16:25:59.611763    1947 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 16:25:59.626366    1947 out.go:201] 

** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-235000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:49312" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-235000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-235000
--- FAIL: TestBinaryMirror (0.25s)
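
Here the local mirror at http://127.0.0.1:49312 cut the transfer short ("unexpected EOF" from the getter). The request URLs in the log show the layout a --binary-mirror must serve: /<version>/bin/<os>/<arch>/kubectl plus a matching kubectl.sha256. A minimal sketch of such a mirror, assuming the binaries are staged under a local ./mirror directory (directory name and port are illustrative):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror, expected to contain e.g.
	//   v1.31.1/bin/darwin/arm64/kubectl
	//   v1.31.1/bin/darwin/arm64/kubectl.sha256
	// (layout taken from the request URLs in the log above).
	log.Fatal(http.ListenAndServe("127.0.0.1:49312", http.FileServer(http.Dir("./mirror"))))
}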

TestOffline (10.13s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-070000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-070000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.973366042s)

-- stdout --
	* [offline-docker-070000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-070000" primary control-plane node in "offline-docker-070000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-070000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:10:25.708321    4830 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:10:25.708478    4830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:10:25.708482    4830 out.go:358] Setting ErrFile to fd 2...
	I0913 17:10:25.708484    4830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:10:25.708607    4830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:10:25.709776    4830 out.go:352] Setting JSON to false
	I0913 17:10:25.727037    4830 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4189,"bootTime":1726268436,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:10:25.727109    4830 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:10:25.731908    4830 out.go:177] * [offline-docker-070000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:10:25.737253    4830 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:10:25.737307    4830 notify.go:220] Checking for updates...
	I0913 17:10:25.744975    4830 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:10:25.746153    4830 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:10:25.749006    4830 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:10:25.751960    4830 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:10:25.754995    4830 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:10:25.758399    4830 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:10:25.758466    4830 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:10:25.763018    4830 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:10:25.769916    4830 start.go:297] selected driver: qemu2
	I0913 17:10:25.769926    4830 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:10:25.769937    4830 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:10:25.771848    4830 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:10:25.774971    4830 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:10:25.778174    4830 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:10:25.778192    4830 cni.go:84] Creating CNI manager for ""
	I0913 17:10:25.778218    4830 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:10:25.778222    4830 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 17:10:25.778265    4830 start.go:340] cluster config:
	{Name:offline-docker-070000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:10:25.781897    4830 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:10:25.788964    4830 out.go:177] * Starting "offline-docker-070000" primary control-plane node in "offline-docker-070000" cluster
	I0913 17:10:25.791916    4830 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:10:25.791940    4830 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:10:25.791954    4830 cache.go:56] Caching tarball of preloaded images
	I0913 17:10:25.792045    4830 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:10:25.792050    4830 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:10:25.792115    4830 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/offline-docker-070000/config.json ...
	I0913 17:10:25.792126    4830 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/offline-docker-070000/config.json: {Name:mka15e2bd44f6e50eee16106c3136ab7df158ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:10:25.792351    4830 start.go:360] acquireMachinesLock for offline-docker-070000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:10:25.792393    4830 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "offline-docker-070000"
	I0913 17:10:25.792403    4830 start.go:93] Provisioning new machine with config: &{Name:offline-docker-070000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:10:25.792426    4830 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:10:25.810786    4830 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0913 17:10:25.826876    4830 start.go:159] libmachine.API.Create for "offline-docker-070000" (driver="qemu2")
	I0913 17:10:25.826917    4830 client.go:168] LocalClient.Create starting
	I0913 17:10:25.827013    4830 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:10:25.827043    4830 main.go:141] libmachine: Decoding PEM data...
	I0913 17:10:25.827052    4830 main.go:141] libmachine: Parsing certificate...
	I0913 17:10:25.827106    4830 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:10:25.827130    4830 main.go:141] libmachine: Decoding PEM data...
	I0913 17:10:25.827140    4830 main.go:141] libmachine: Parsing certificate...
	I0913 17:10:25.827522    4830 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:10:25.986516    4830 main.go:141] libmachine: Creating SSH key...
	I0913 17:10:26.163424    4830 main.go:141] libmachine: Creating Disk image...
	I0913 17:10:26.163433    4830 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:10:26.168734    4830 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/disk.qcow2
	I0913 17:10:26.185507    4830 main.go:141] libmachine: STDOUT: 
	I0913 17:10:26.185524    4830 main.go:141] libmachine: STDERR: 
	I0913 17:10:26.185594    4830 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/disk.qcow2 +20000M
	I0913 17:10:26.194396    4830 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:10:26.194417    4830 main.go:141] libmachine: STDERR: 
	I0913 17:10:26.194446    4830 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/disk.qcow2
	I0913 17:10:26.194451    4830 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:10:26.194466    4830 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:10:26.194500    4830 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:21:94:86:a3:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/disk.qcow2
	I0913 17:10:26.196456    4830 main.go:141] libmachine: STDOUT: 
	I0913 17:10:26.196509    4830 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:10:26.196534    4830 client.go:171] duration metric: took 369.616917ms to LocalClient.Create
	I0913 17:10:28.198600    4830 start.go:128] duration metric: took 2.406198334s to createHost
	I0913 17:10:28.198642    4830 start.go:83] releasing machines lock for "offline-docker-070000", held for 2.406280625s
	W0913 17:10:28.198670    4830 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:10:28.217366    4830 out.go:177] * Deleting "offline-docker-070000" in qemu2 ...
	W0913 17:10:28.235930    4830 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:10:28.235940    4830 start.go:729] Will try again in 5 seconds ...
	I0913 17:10:33.237957    4830 start.go:360] acquireMachinesLock for offline-docker-070000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:10:33.238095    4830 start.go:364] duration metric: took 96.375µs to acquireMachinesLock for "offline-docker-070000"
	I0913 17:10:33.238123    4830 start.go:93] Provisioning new machine with config: &{Name:offline-docker-070000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-070000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:10:33.238183    4830 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:10:33.248359    4830 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0913 17:10:33.264909    4830 start.go:159] libmachine.API.Create for "offline-docker-070000" (driver="qemu2")
	I0913 17:10:33.264940    4830 client.go:168] LocalClient.Create starting
	I0913 17:10:33.265011    4830 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:10:33.265041    4830 main.go:141] libmachine: Decoding PEM data...
	I0913 17:10:33.265050    4830 main.go:141] libmachine: Parsing certificate...
	I0913 17:10:33.265081    4830 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:10:33.265103    4830 main.go:141] libmachine: Decoding PEM data...
	I0913 17:10:33.265116    4830 main.go:141] libmachine: Parsing certificate...
	I0913 17:10:33.265389    4830 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:10:33.421072    4830 main.go:141] libmachine: Creating SSH key...
	I0913 17:10:33.587626    4830 main.go:141] libmachine: Creating Disk image...
	I0913 17:10:33.587637    4830 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:10:33.587832    4830 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/disk.qcow2
	I0913 17:10:33.597493    4830 main.go:141] libmachine: STDOUT: 
	I0913 17:10:33.597523    4830 main.go:141] libmachine: STDERR: 
	I0913 17:10:33.597596    4830 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/disk.qcow2 +20000M
	I0913 17:10:33.606228    4830 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:10:33.606245    4830 main.go:141] libmachine: STDERR: 
	I0913 17:10:33.606267    4830 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/disk.qcow2
	I0913 17:10:33.606279    4830 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:10:33.606289    4830 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:10:33.606315    4830 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:4c:90:7b:a6:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/offline-docker-070000/disk.qcow2
	I0913 17:10:33.607974    4830 main.go:141] libmachine: STDOUT: 
	I0913 17:10:33.607989    4830 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:10:33.608004    4830 client.go:171] duration metric: took 343.065459ms to LocalClient.Create
	I0913 17:10:35.610195    4830 start.go:128] duration metric: took 2.372022625s to createHost
	I0913 17:10:35.610286    4830 start.go:83] releasing machines lock for "offline-docker-070000", held for 2.372215083s
	W0913 17:10:35.610658    4830 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-070000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:10:35.620243    4830 out.go:201] 
	W0913 17:10:35.624344    4830 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:10:35.624379    4830 out.go:270] * 
	W0913 17:10:35.627267    4830 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:10:35.637130    4830 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-070000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-09-13 17:10:35.652195 -0700 PDT m=+2703.155065959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-070000 -n offline-docker-070000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-070000 -n offline-docker-070000: exit status 7 (72.385959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-070000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-070000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-070000
--- FAIL: TestOffline (10.13s)
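
Every qemu2 start failure in this run shows the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client exits before QEMU is ever launched. The following is a minimal diagnostic sketch in Go, assuming only the standard library and the socket path taken from the log above; it is not part of the minikube test suite, but a "connection refused" from it reproduces the failure without starting a VM.

// probe_vmnet.go — hedged diagnostic sketch (assumed standalone tool, not
// minikube code): dials the socket_vmnet UNIX socket that the qemu2 driver
// needs. If the daemon is down, this fails with the same "connection
// refused" seen in the log above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failing log line
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1) // mirrors the driver's failure mode
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}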

TestAddons/parallel/Registry (71.28s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.061125ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-b2tw6" [f3db3871-9bfc-4a43-96ef-55856578d904] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010638958s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nvbdn" [cd8d98f1-6d88-45e6-a0e4-d8808da7a54f] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005881209s
addons_test.go:342: (dbg) Run:  kubectl --context addons-979000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-979000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-979000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.059573292s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-979000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-979000 ip
2024/09/13 16:39:07 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-979000 addons disable registry --alsologtostderr -v=1
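
The in-cluster request to registry.kube-system.svc.cluster.local timed out, while the follow-up host-side probe (the DEBUG GET against 192.168.105.2:5000 above) checks the registry through the node IP instead. A minimal Go sketch that repeats that host-side probe, assuming the node IP and registry port copied from the log; the test itself expects an HTTP 200 here.

// registry_probe.go — hedged sketch of the host-side fallback check (assumed
// standalone tool): issues a plain GET against the registry endpoint that the
// DEBUG line above probes. Distinguishes "registry down" from "in-cluster DNS
// or proxy broken".
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.105.2:5000/") // endpoint from the log
	if err != nil {
		fmt.Fprintf(os.Stderr, "registry unreachable: %v\n", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status) // test expects HTTP 200
}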
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-979000 -n addons-979000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-979000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-882000 | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT |                     |
	|         | -p download-only-882000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT | 13 Sep 24 16:25 PDT |
	| delete  | -p download-only-882000              | download-only-882000 | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT | 13 Sep 24 16:25 PDT |
	| start   | -o=json --download-only              | download-only-302000 | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT |                     |
	|         | -p download-only-302000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT | 13 Sep 24 16:25 PDT |
	| delete  | -p download-only-302000              | download-only-302000 | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT | 13 Sep 24 16:25 PDT |
	| delete  | -p download-only-882000              | download-only-882000 | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT | 13 Sep 24 16:25 PDT |
	| delete  | -p download-only-302000              | download-only-302000 | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT | 13 Sep 24 16:25 PDT |
	| start   | --download-only -p                   | binary-mirror-235000 | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT |                     |
	|         | binary-mirror-235000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49312               |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-235000              | binary-mirror-235000 | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT | 13 Sep 24 16:25 PDT |
	| addons  | disable dashboard -p                 | addons-979000        | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT |                     |
	|         | addons-979000                        |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-979000        | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT |                     |
	|         | addons-979000                        |                      |         |         |                     |                     |
	| start   | -p addons-979000 --wait=true         | addons-979000        | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT | 13 Sep 24 16:29 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | addons-979000 addons disable         | addons-979000        | jenkins | v1.34.0 | 13 Sep 24 16:29 PDT | 13 Sep 24 16:29 PDT |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-979000 addons                 | addons-979000        | jenkins | v1.34.0 | 13 Sep 24 16:38 PDT | 13 Sep 24 16:38 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-979000 addons                 | addons-979000        | jenkins | v1.34.0 | 13 Sep 24 16:38 PDT | 13 Sep 24 16:38 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-979000 addons                 | addons-979000        | jenkins | v1.34.0 | 13 Sep 24 16:38 PDT | 13 Sep 24 16:38 PDT |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-979000        | jenkins | v1.34.0 | 13 Sep 24 16:38 PDT | 13 Sep 24 16:38 PDT |
	|         | addons-979000                        |                      |         |         |                     |                     |
	| ssh     | addons-979000 ssh curl -s            | addons-979000        | jenkins | v1.34.0 | 13 Sep 24 16:38 PDT | 13 Sep 24 16:38 PDT |
	|         | http://127.0.0.1/ -H 'Host:          |                      |         |         |                     |                     |
	|         | nginx.example.com'                   |                      |         |         |                     |                     |
	| ip      | addons-979000 ip                     | addons-979000        | jenkins | v1.34.0 | 13 Sep 24 16:38 PDT | 13 Sep 24 16:38 PDT |
	| addons  | addons-979000 addons disable         | addons-979000        | jenkins | v1.34.0 | 13 Sep 24 16:38 PDT | 13 Sep 24 16:38 PDT |
	|         | ingress-dns --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-979000 addons disable         | addons-979000        | jenkins | v1.34.0 | 13 Sep 24 16:38 PDT | 13 Sep 24 16:39 PDT |
	|         | ingress --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| ip      | addons-979000 ip                     | addons-979000        | jenkins | v1.34.0 | 13 Sep 24 16:39 PDT | 13 Sep 24 16:39 PDT |
	| addons  | addons-979000 addons disable         | addons-979000        | jenkins | v1.34.0 | 13 Sep 24 16:39 PDT | 13 Sep 24 16:39 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 16:25:59
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 16:25:59.784018    1961 out.go:345] Setting OutFile to fd 1 ...
	I0913 16:25:59.784156    1961 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:25:59.784160    1961 out.go:358] Setting ErrFile to fd 2...
	I0913 16:25:59.784162    1961 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:25:59.784285    1961 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 16:25:59.785406    1961 out.go:352] Setting JSON to false
	I0913 16:25:59.801520    1961 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1523,"bootTime":1726268436,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 16:25:59.801586    1961 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 16:25:59.806398    1961 out.go:177] * [addons-979000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 16:25:59.813360    1961 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 16:25:59.813407    1961 notify.go:220] Checking for updates...
	I0913 16:25:59.820342    1961 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 16:25:59.823308    1961 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 16:25:59.826250    1961 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 16:25:59.829308    1961 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 16:25:59.832326    1961 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 16:25:59.835481    1961 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 16:25:59.840280    1961 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 16:25:59.847217    1961 start.go:297] selected driver: qemu2
	I0913 16:25:59.847224    1961 start.go:901] validating driver "qemu2" against <nil>
	I0913 16:25:59.847229    1961 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 16:25:59.849395    1961 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 16:25:59.852284    1961 out.go:177] * Automatically selected the socket_vmnet network
	I0913 16:25:59.855462    1961 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 16:25:59.855488    1961 cni.go:84] Creating CNI manager for ""
	I0913 16:25:59.855510    1961 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 16:25:59.855519    1961 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 16:25:59.855548    1961 start.go:340] cluster config:
	{Name:addons-979000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-979000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 16:25:59.859248    1961 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 16:25:59.867315    1961 out.go:177] * Starting "addons-979000" primary control-plane node in "addons-979000" cluster
	I0913 16:25:59.871355    1961 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 16:25:59.871370    1961 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 16:25:59.871383    1961 cache.go:56] Caching tarball of preloaded images
	I0913 16:25:59.871449    1961 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 16:25:59.871455    1961 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 16:25:59.871679    1961 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/config.json ...
	I0913 16:25:59.871690    1961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/config.json: {Name:mkd8724cab105ee8b37c67f73122d82fcb973162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 16:25:59.872098    1961 start.go:360] acquireMachinesLock for addons-979000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 16:25:59.872165    1961 start.go:364] duration metric: took 60.958µs to acquireMachinesLock for "addons-979000"
	I0913 16:25:59.872176    1961 start.go:93] Provisioning new machine with config: &{Name:addons-979000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-979000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 16:25:59.872206    1961 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 16:25:59.881276    1961 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0913 16:26:00.120682    1961 start.go:159] libmachine.API.Create for "addons-979000" (driver="qemu2")
	I0913 16:26:00.120740    1961 client.go:168] LocalClient.Create starting
	I0913 16:26:00.120942    1961 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 16:26:00.256071    1961 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 16:26:00.447662    1961 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 16:26:00.719442    1961 main.go:141] libmachine: Creating SSH key...
	I0913 16:26:00.760643    1961 main.go:141] libmachine: Creating Disk image...
	I0913 16:26:00.760649    1961 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 16:26:00.760925    1961 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/disk.qcow2
	I0913 16:26:00.779846    1961 main.go:141] libmachine: STDOUT: 
	I0913 16:26:00.779869    1961 main.go:141] libmachine: STDERR: 
	I0913 16:26:00.779933    1961 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/disk.qcow2 +20000M
	I0913 16:26:00.788030    1961 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 16:26:00.788045    1961 main.go:141] libmachine: STDERR: 
	I0913 16:26:00.788058    1961 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/disk.qcow2
	I0913 16:26:00.788063    1961 main.go:141] libmachine: Starting QEMU VM...
	I0913 16:26:00.788100    1961 qemu.go:418] Using hvf for hardware acceleration
	I0913 16:26:00.788134    1961 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:de:e0:64:60:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/disk.qcow2
	I0913 16:26:00.845278    1961 main.go:141] libmachine: STDOUT: 
	I0913 16:26:00.845325    1961 main.go:141] libmachine: STDERR: 
	I0913 16:26:00.845330    1961 main.go:141] libmachine: Attempt 0
	I0913 16:26:00.845342    1961 main.go:141] libmachine: Searching for 46:de:e0:64:60:90 in /var/db/dhcpd_leases ...
	I0913 16:26:00.845416    1961 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0913 16:26:00.845434    1961 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e61b5e}
	I0913 16:26:02.847568    1961 main.go:141] libmachine: Attempt 1
	I0913 16:26:02.847656    1961 main.go:141] libmachine: Searching for 46:de:e0:64:60:90 in /var/db/dhcpd_leases ...
	I0913 16:26:02.848093    1961 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0913 16:26:02.848147    1961 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e61b5e}
	I0913 16:26:04.849406    1961 main.go:141] libmachine: Attempt 2
	I0913 16:26:04.849578    1961 main.go:141] libmachine: Searching for 46:de:e0:64:60:90 in /var/db/dhcpd_leases ...
	I0913 16:26:04.849956    1961 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0913 16:26:04.850052    1961 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e61b5e}
	I0913 16:26:06.852212    1961 main.go:141] libmachine: Attempt 3
	I0913 16:26:06.852245    1961 main.go:141] libmachine: Searching for 46:de:e0:64:60:90 in /var/db/dhcpd_leases ...
	I0913 16:26:06.852379    1961 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0913 16:26:06.852397    1961 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e61b5e}
	I0913 16:26:08.854408    1961 main.go:141] libmachine: Attempt 4
	I0913 16:26:08.854417    1961 main.go:141] libmachine: Searching for 46:de:e0:64:60:90 in /var/db/dhcpd_leases ...
	I0913 16:26:08.854499    1961 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0913 16:26:08.854507    1961 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e61b5e}
	I0913 16:26:10.856566    1961 main.go:141] libmachine: Attempt 5
	I0913 16:26:10.856607    1961 main.go:141] libmachine: Searching for 46:de:e0:64:60:90 in /var/db/dhcpd_leases ...
	I0913 16:26:10.856685    1961 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0913 16:26:10.856702    1961 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e61b5e}
	I0913 16:26:12.858735    1961 main.go:141] libmachine: Attempt 6
	I0913 16:26:12.858752    1961 main.go:141] libmachine: Searching for 46:de:e0:64:60:90 in /var/db/dhcpd_leases ...
	I0913 16:26:12.858830    1961 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0913 16:26:12.858839    1961 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66e61b5e}
	I0913 16:26:14.860950    1961 main.go:141] libmachine: Attempt 7
	I0913 16:26:14.861028    1961 main.go:141] libmachine: Searching for 46:de:e0:64:60:90 in /var/db/dhcpd_leases ...
	I0913 16:26:14.861481    1961 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0913 16:26:14.861532    1961 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:46:de:e0:64:60:90 ID:1,46:de:e0:64:60:90 Lease:0x66e61b95}
	I0913 16:26:14.861550    1961 main.go:141] libmachine: Found match: 46:de:e0:64:60:90
	I0913 16:26:14.861588    1961 main.go:141] libmachine: IP: 192.168.105.2
	I0913 16:26:14.861612    1961 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0913 16:26:17.879503    1961 machine.go:93] provisionDockerMachine start ...
	I0913 16:26:17.881123    1961 main.go:141] libmachine: Using SSH client type: native
	I0913 16:26:17.881608    1961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104635190] 0x1046379d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0913 16:26:17.881629    1961 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 16:26:17.955951    1961 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 16:26:17.955988    1961 buildroot.go:166] provisioning hostname "addons-979000"
	I0913 16:26:17.956151    1961 main.go:141] libmachine: Using SSH client type: native
	I0913 16:26:17.956422    1961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104635190] 0x1046379d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0913 16:26:17.956434    1961 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-979000 && echo "addons-979000" | sudo tee /etc/hostname
	I0913 16:26:18.025227    1961 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-979000
	
	I0913 16:26:18.025315    1961 main.go:141] libmachine: Using SSH client type: native
	I0913 16:26:18.025485    1961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104635190] 0x1046379d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0913 16:26:18.025498    1961 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-979000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-979000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-979000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 16:26:18.082109    1961 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 16:26:18.082125    1961 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19640-1360/.minikube CaCertPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19640-1360/.minikube}
	I0913 16:26:18.082133    1961 buildroot.go:174] setting up certificates
	I0913 16:26:18.082144    1961 provision.go:84] configureAuth start
	I0913 16:26:18.082152    1961 provision.go:143] copyHostCerts
	I0913 16:26:18.082277    1961 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.pem (1078 bytes)
	I0913 16:26:18.082518    1961 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19640-1360/.minikube/cert.pem (1123 bytes)
	I0913 16:26:18.082641    1961 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19640-1360/.minikube/key.pem (1679 bytes)
	I0913 16:26:18.082742    1961 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca-key.pem org=jenkins.addons-979000 san=[127.0.0.1 192.168.105.2 addons-979000 localhost minikube]
	I0913 16:26:18.411585    1961 provision.go:177] copyRemoteCerts
	I0913 16:26:18.411681    1961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 16:26:18.411703    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:18.440255    1961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 16:26:18.448835    1961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 16:26:18.456967    1961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 16:26:18.464946    1961 provision.go:87] duration metric: took 382.796875ms to configureAuth
	I0913 16:26:18.464955    1961 buildroot.go:189] setting minikube options for container-runtime
	I0913 16:26:18.465071    1961 config.go:182] Loaded profile config "addons-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 16:26:18.465113    1961 main.go:141] libmachine: Using SSH client type: native
	I0913 16:26:18.465203    1961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104635190] 0x1046379d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0913 16:26:18.465208    1961 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0913 16:26:18.514464    1961 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0913 16:26:18.514473    1961 buildroot.go:70] root file system type: tmpfs
	I0913 16:26:18.514523    1961 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0913 16:26:18.514590    1961 main.go:141] libmachine: Using SSH client type: native
	I0913 16:26:18.514696    1961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104635190] 0x1046379d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0913 16:26:18.514729    1961 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0913 16:26:18.570914    1961 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0913 16:26:18.570986    1961 main.go:141] libmachine: Using SSH client type: native
	I0913 16:26:18.571107    1961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104635190] 0x1046379d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0913 16:26:18.571115    1961 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0913 16:26:19.971194    1961 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0913 16:26:19.971207    1961 machine.go:96] duration metric: took 2.091710459s to provisionDockerMachine
	I0913 16:26:19.971214    1961 client.go:171] duration metric: took 19.850831209s to LocalClient.Create
	I0913 16:26:19.971227    1961 start.go:167] duration metric: took 19.850913375s to libmachine.API.Create "addons-979000"
	I0913 16:26:19.971231    1961 start.go:293] postStartSetup for "addons-979000" (driver="qemu2")
	I0913 16:26:19.971239    1961 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 16:26:19.971330    1961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 16:26:19.971342    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:20.003532    1961 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 16:26:20.005006    1961 info.go:137] Remote host: Buildroot 2023.02.9
	I0913 16:26:20.005014    1961 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19640-1360/.minikube/addons for local assets ...
	I0913 16:26:20.005123    1961 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19640-1360/.minikube/files for local assets ...
	I0913 16:26:20.005157    1961 start.go:296] duration metric: took 33.922125ms for postStartSetup
	I0913 16:26:20.005575    1961 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/config.json ...
	I0913 16:26:20.005762    1961 start.go:128] duration metric: took 20.133919334s to createHost
	I0913 16:26:20.005787    1961 main.go:141] libmachine: Using SSH client type: native
	I0913 16:26:20.005880    1961 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104635190] 0x1046379d0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0913 16:26:20.005885    1961 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 16:26:20.053742    1961 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726269980.280471795
	
	I0913 16:26:20.053750    1961 fix.go:216] guest clock: 1726269980.280471795
	I0913 16:26:20.053755    1961 fix.go:229] Guest: 2024-09-13 16:26:20.280471795 -0700 PDT Remote: 2024-09-13 16:26:20.005765 -0700 PDT m=+20.240095917 (delta=274.706795ms)
	I0913 16:26:20.053766    1961 fix.go:200] guest clock delta is within tolerance: 274.706795ms
	I0913 16:26:20.053769    1961 start.go:83] releasing machines lock for "addons-979000", held for 20.181967s
	I0913 16:26:20.054055    1961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 16:26:20.054055    1961 ssh_runner.go:195] Run: cat /version.json
	I0913 16:26:20.054086    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:20.054086    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:20.127474    1961 ssh_runner.go:195] Run: systemctl --version
	I0913 16:26:20.129974    1961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 16:26:20.132046    1961 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 16:26:20.132081    1961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 16:26:20.138795    1961 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 16:26:20.138802    1961 start.go:495] detecting cgroup driver to use...
	I0913 16:26:20.138918    1961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 16:26:20.145517    1961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0913 16:26:20.149143    1961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0913 16:26:20.152872    1961 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0913 16:26:20.152904    1961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0913 16:26:20.156643    1961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 16:26:20.160428    1961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0913 16:26:20.164451    1961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 16:26:20.168327    1961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 16:26:20.172428    1961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0913 16:26:20.176225    1961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0913 16:26:20.180236    1961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0913 16:26:20.184197    1961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 16:26:20.188230    1961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 16:26:20.191985    1961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 16:26:20.273384    1961 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0913 16:26:20.283902    1961 start.go:495] detecting cgroup driver to use...
	I0913 16:26:20.283988    1961 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0913 16:26:20.291377    1961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 16:26:20.296619    1961 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 16:26:20.305654    1961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 16:26:20.310766    1961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 16:26:20.315877    1961 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0913 16:26:20.348584    1961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 16:26:20.354354    1961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 16:26:20.360488    1961 ssh_runner.go:195] Run: which cri-dockerd
	I0913 16:26:20.362084    1961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0913 16:26:20.365171    1961 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0913 16:26:20.371255    1961 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0913 16:26:20.440888    1961 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0913 16:26:20.528967    1961 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0913 16:26:20.529022    1961 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0913 16:26:20.535249    1961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 16:26:20.618069    1961 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 16:26:22.802712    1961 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.184663916s)
	I0913 16:26:22.802793    1961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0913 16:26:22.808390    1961 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0913 16:26:22.815796    1961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 16:26:22.821474    1961 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0913 16:26:22.900535    1961 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0913 16:26:22.981089    1961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 16:26:23.064764    1961 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0913 16:26:23.071316    1961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 16:26:23.077057    1961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 16:26:23.169532    1961 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0913 16:26:23.194922    1961 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0913 16:26:23.195014    1961 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0913 16:26:23.197354    1961 start.go:563] Will wait 60s for crictl version
	I0913 16:26:23.197402    1961 ssh_runner.go:195] Run: which crictl
	I0913 16:26:23.198978    1961 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 16:26:23.218304    1961 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
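	crictl resolves its endpoint from the /etc/crictl.yaml written at 16:26:20.354354, so the version probe above can be reproduced either way (a sketch; -r/--runtime-endpoint is crictl's standard flag):

    sudo /usr/bin/crictl version
    # equivalent, pinning the cri-dockerd endpoint explicitly:
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version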
	I0913 16:26:23.218390    1961 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 16:26:23.230351    1961 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 16:26:23.244994    1961 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0913 16:26:23.245143    1961 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0913 16:26:23.246712    1961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 16:26:23.251317    1961 kubeadm.go:883] updating cluster {Name:addons-979000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-979000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 16:26:23.251369    1961 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 16:26:23.251421    1961 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 16:26:23.257432    1961 docker.go:685] Got preloaded images: 
	I0913 16:26:23.257445    1961 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0913 16:26:23.257495    1961 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 16:26:23.261030    1961 ssh_runner.go:195] Run: which lz4
	I0913 16:26:23.262510    1961 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 16:26:23.263976    1961 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 16:26:23.263989    1961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322160019 bytes)
	I0913 16:26:24.520106    1961 docker.go:649] duration metric: took 1.257664667s to copy over tarball
	I0913 16:26:24.520174    1961 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 16:26:25.487301    1961 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0913 16:26:25.502394    1961 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 16:26:25.505937    1961 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0913 16:26:25.512051    1961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 16:26:25.597940    1961 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 16:26:27.802733    1961 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.204816625s)
	I0913 16:26:27.802849    1961 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 16:26:27.808758    1961 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 16:26:27.808767    1961 cache_images.go:84] Images are preloaded, skipping loading
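The preload path taken above (scp the tarball, unpack it over /var, restart Docker, re-list images) reduces to the following manual sequence (a sketch assembled from the commands in this log; connection flags elided):

    # Copy the cached tarball into the guest (the test uses its own ssh_runner scp)
    scp /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 docker@192.168.105.2:/preloaded.tar.lz4
    # Inside the guest: unpack Docker's image store, clean up, restart
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo systemctl restart docker
    docker images --format '{{.Repository}}:{{.Tag}}'   # should now list the registry.k8s.io images above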
	I0913 16:26:27.808790    1961 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.1 docker true true} ...
	I0913 16:26:27.808858    1961 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-979000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-979000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 16:26:27.808925    1961 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0913 16:26:27.829821    1961 cni.go:84] Creating CNI manager for ""
	I0913 16:26:27.829841    1961 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 16:26:27.829852    1961 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 16:26:27.829862    1961 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-979000 NodeName:addons-979000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 16:26:27.829919    1961 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-979000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
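	The rendered config is handed to kubeadm init below; it can also be checked by hand first. A sketch, assuming the kubeadm config subcommands shipped in the v1.31.1 binaries used here (the migrate invocation and the output file name are illustrative):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    # move off the deprecated kubeadm.k8s.io/v1beta3 spec flagged in the init warnings later in this log:
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm-new.yaml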
	I0913 16:26:27.829996    1961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 16:26:27.833513    1961 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 16:26:27.833549    1961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 16:26:27.836725    1961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0913 16:26:27.842648    1961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 16:26:27.848306    1961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0913 16:26:27.854206    1961 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0913 16:26:27.855470    1961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 16:26:27.859858    1961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 16:26:27.946144    1961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 16:26:27.954387    1961 certs.go:68] Setting up /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000 for IP: 192.168.105.2
	I0913 16:26:27.954396    1961 certs.go:194] generating shared ca certs ...
	I0913 16:26:27.954405    1961 certs.go:226] acquiring lock for ca certs: {Name:mka1fd556c9b3f29c4a4f622bab1c9ab3ca42c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 16:26:27.954608    1961 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.key
	I0913 16:26:28.020543    1961 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.crt ...
	I0913 16:26:28.020553    1961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.crt: {Name:mk0a04c4081d6949d8970f4c822dbfa8d54301d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 16:26:28.020861    1961 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.key ...
	I0913 16:26:28.020866    1961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.key: {Name:mk5b786aac6838b0bd21025866eaaf022729a43f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 16:26:28.021006    1961 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/proxy-client-ca.key
	I0913 16:26:28.070951    1961 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19640-1360/.minikube/proxy-client-ca.crt ...
	I0913 16:26:28.070955    1961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/proxy-client-ca.crt: {Name:mkcf2efd1b2d5af4f7d1d739e846bbfcc9c6c542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 16:26:28.071089    1961 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19640-1360/.minikube/proxy-client-ca.key ...
	I0913 16:26:28.071092    1961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/proxy-client-ca.key: {Name:mk18e429016672875a792aa3f106e57b253d0d01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 16:26:28.071229    1961 certs.go:256] generating profile certs ...
	I0913 16:26:28.071264    1961 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.key
	I0913 16:26:28.071271    1961 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt with IP's: []
	I0913 16:26:28.167395    1961 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt ...
	I0913 16:26:28.167402    1961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: {Name:mk92c958c732b0e315bcb6030482bf29ec6dfd3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 16:26:28.167593    1961 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.key ...
	I0913 16:26:28.167598    1961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.key: {Name:mked01968521ccb54d7a06f582e07b5494717835 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 16:26:28.167727    1961 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/apiserver.key.8889fa2d
	I0913 16:26:28.167741    1961 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/apiserver.crt.8889fa2d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0913 16:26:28.289964    1961 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/apiserver.crt.8889fa2d ...
	I0913 16:26:28.289968    1961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/apiserver.crt.8889fa2d: {Name:mk0e0baa0e16e47c03f6fbc54f50fd710674d016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 16:26:28.290132    1961 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/apiserver.key.8889fa2d ...
	I0913 16:26:28.290136    1961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/apiserver.key.8889fa2d: {Name:mk508d748f6223eabb70161fb27e4b54b899a662 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 16:26:28.290257    1961 certs.go:381] copying /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/apiserver.crt.8889fa2d -> /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/apiserver.crt
	I0913 16:26:28.290396    1961 certs.go:385] copying /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/apiserver.key.8889fa2d -> /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/apiserver.key
	I0913 16:26:28.290492    1961 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/proxy-client.key
	I0913 16:26:28.290501    1961 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/proxy-client.crt with IP's: []
	I0913 16:26:28.362007    1961 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/proxy-client.crt ...
	I0913 16:26:28.362011    1961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/proxy-client.crt: {Name:mk4fd993aaefb4c21ba6988863392f6ddc4034b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 16:26:28.362159    1961 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/proxy-client.key ...
	I0913 16:26:28.362161    1961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/proxy-client.key: {Name:mke79e2af1905044f23a028ada30d653f82046c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 16:26:28.362435    1961 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 16:26:28.362465    1961 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem (1078 bytes)
	I0913 16:26:28.362492    1961 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem (1123 bytes)
	I0913 16:26:28.362512    1961 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/key.pem (1679 bytes)
	I0913 16:26:28.362969    1961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 16:26:28.372128    1961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 16:26:28.380430    1961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 16:26:28.388462    1961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 16:26:28.396530    1961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0913 16:26:28.404823    1961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0913 16:26:28.413078    1961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 16:26:28.421143    1961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 16:26:28.429672    1961 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 16:26:28.437818    1961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 16:26:28.444293    1961 ssh_runner.go:195] Run: openssl version
	I0913 16:26:28.446766    1961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 16:26:28.450262    1961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 16:26:28.451738    1961 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0913 16:26:28.451763    1961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 16:26:28.453923    1961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
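The b5213941.0 symlink created above follows OpenSSL's subject-hash naming scheme; the two Run: lines compute it like this (a sketch of the same steps):

    # openssl prints the subject hash (b5213941 here); the .0 suffix disambiguates collisions
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"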
	I0913 16:26:28.457409    1961 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 16:26:28.458801    1961 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 16:26:28.458846    1961 kubeadm.go:392] StartCluster: {Name:addons-979000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-979000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 16:26:28.458925    1961 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 16:26:28.463866    1961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 16:26:28.467726    1961 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 16:26:28.471541    1961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 16:26:28.475473    1961 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 16:26:28.475478    1961 kubeadm.go:157] found existing configuration files:
	
	I0913 16:26:28.475507    1961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 16:26:28.479301    1961 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 16:26:28.479330    1961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 16:26:28.482742    1961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 16:26:28.485940    1961 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 16:26:28.485968    1961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 16:26:28.489226    1961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 16:26:28.492483    1961 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 16:26:28.492510    1961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 16:26:28.496040    1961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 16:26:28.499720    1961 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 16:26:28.499750    1961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 16:26:28.503421    1961 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 16:26:28.523891    1961 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 16:26:28.523919    1961 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 16:26:28.562929    1961 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 16:26:28.563014    1961 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 16:26:28.563075    1961 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 16:26:28.567361    1961 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 16:26:28.578757    1961 out.go:235]   - Generating certificates and keys ...
	I0913 16:26:28.578791    1961 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 16:26:28.578825    1961 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 16:26:28.654819    1961 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 16:26:28.715363    1961 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 16:26:28.797336    1961 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 16:26:28.968991    1961 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 16:26:29.122103    1961 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 16:26:29.122177    1961 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-979000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0913 16:26:29.285644    1961 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 16:26:29.285714    1961 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-979000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0913 16:26:29.345348    1961 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 16:26:29.519081    1961 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 16:26:29.589033    1961 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 16:26:29.589064    1961 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 16:26:29.685779    1961 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 16:26:29.773998    1961 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 16:26:29.948557    1961 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 16:26:29.985885    1961 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 16:26:30.047134    1961 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 16:26:30.047393    1961 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 16:26:30.048666    1961 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 16:26:30.056140    1961 out.go:235]   - Booting up control plane ...
	I0913 16:26:30.056202    1961 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 16:26:30.056263    1961 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 16:26:30.056310    1961 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 16:26:30.059516    1961 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 16:26:30.061836    1961 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 16:26:30.061866    1961 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 16:26:30.157687    1961 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 16:26:30.157752    1961 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 16:26:30.660735    1961 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.773625ms
	I0913 16:26:30.660898    1961 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 16:26:33.661723    1961 kubeadm.go:310] [api-check] The API server is healthy after 3.00130946s
	I0913 16:26:33.670267    1961 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 16:26:33.675564    1961 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 16:26:33.684220    1961 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 16:26:33.684335    1961 kubeadm.go:310] [mark-control-plane] Marking the node addons-979000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 16:26:33.687751    1961 kubeadm.go:310] [bootstrap-token] Using token: wagve0.566ddv0qyy038zjr
	I0913 16:26:33.694093    1961 out.go:235]   - Configuring RBAC rules ...
	I0913 16:26:33.694153    1961 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 16:26:33.695071    1961 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 16:26:33.700581    1961 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 16:26:33.701487    1961 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 16:26:33.702390    1961 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 16:26:33.703627    1961 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 16:26:34.084248    1961 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 16:26:34.474424    1961 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 16:26:35.066248    1961 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 16:26:35.067100    1961 kubeadm.go:310] 
	I0913 16:26:35.067170    1961 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 16:26:35.067181    1961 kubeadm.go:310] 
	I0913 16:26:35.067288    1961 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 16:26:35.067305    1961 kubeadm.go:310] 
	I0913 16:26:35.067337    1961 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 16:26:35.067407    1961 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 16:26:35.067461    1961 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 16:26:35.067470    1961 kubeadm.go:310] 
	I0913 16:26:35.067532    1961 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 16:26:35.067545    1961 kubeadm.go:310] 
	I0913 16:26:35.067598    1961 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 16:26:35.067604    1961 kubeadm.go:310] 
	I0913 16:26:35.067677    1961 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 16:26:35.067793    1961 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 16:26:35.067876    1961 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 16:26:35.067889    1961 kubeadm.go:310] 
	I0913 16:26:35.067988    1961 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 16:26:35.068074    1961 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 16:26:35.068087    1961 kubeadm.go:310] 
	I0913 16:26:35.068177    1961 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wagve0.566ddv0qyy038zjr \
	I0913 16:26:35.068281    1961 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:446f8f90cde123cbedc005b3a5de5af09ada936a0c1ba8e89eedb16e20223601 \
	I0913 16:26:35.068309    1961 kubeadm.go:310] 	--control-plane 
	I0913 16:26:35.068313    1961 kubeadm.go:310] 
	I0913 16:26:35.068415    1961 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 16:26:35.068431    1961 kubeadm.go:310] 
	I0913 16:26:35.068515    1961 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wagve0.566ddv0qyy038zjr \
	I0913 16:26:35.068624    1961 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:446f8f90cde123cbedc005b3a5de5af09ada936a0c1ba8e89eedb16e20223601 
	I0913 16:26:35.068974    1961 kubeadm.go:310] W0913 23:26:28.749364    1579 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 16:26:35.069302    1961 kubeadm.go:310] W0913 23:26:28.750150    1579 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 16:26:35.069432    1961 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 16:26:35.069447    1961 cni.go:84] Creating CNI manager for ""
	I0913 16:26:35.069463    1961 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 16:26:35.073879    1961 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 16:26:35.077883    1961 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 16:26:35.085589    1961 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
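The 496-byte /etc/cni/net.d/1-k8s.conflist is not printed here. A bridge-plus-portmap conflist of the kind this step installs would have roughly this shape (hypothetical contents matching the 10.244.0.0/16 pod CIDR used above, not the literal file):

    # Hypothetical sketch of a bridge CNI conflist like the one installed above
    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF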
	I0913 16:26:35.096331    1961 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 16:26:35.096425    1961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 16:26:35.096503    1961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-979000 minikube.k8s.io/updated_at=2024_09_13T16_26_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=addons-979000 minikube.k8s.io/primary=true
	I0913 16:26:35.160046    1961 ops.go:34] apiserver oom_adj: -16
	I0913 16:26:35.160094    1961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 16:26:35.662225    1961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 16:26:36.160837    1961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 16:26:36.662274    1961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 16:26:37.160459    1961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 16:26:37.662170    1961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 16:26:38.162207    1961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 16:26:38.662439    1961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 16:26:39.162165    1961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 16:26:39.662425    1961 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 16:26:39.748186    1961 kubeadm.go:1113] duration metric: took 4.65190775s to wait for elevateKubeSystemPrivileges
	I0913 16:26:39.748205    1961 kubeadm.go:394] duration metric: took 11.289565833s to StartCluster
	I0913 16:26:39.748218    1961 settings.go:142] acquiring lock: {Name:mk948e653988f014de7183ca44ad61265c2dc06d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 16:26:39.748403    1961 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 16:26:39.748669    1961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/kubeconfig: {Name:mke2b016812cedc34ffbfc79dbc5c22d8c43c377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 16:26:39.748937    1961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 16:26:39.748962    1961 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 16:26:39.749055    1961 config.go:182] Loaded profile config "addons-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 16:26:39.749011    1961 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0913 16:26:39.749089    1961 addons.go:69] Setting yakd=true in profile "addons-979000"
	I0913 16:26:39.749093    1961 addons.go:69] Setting default-storageclass=true in profile "addons-979000"
	I0913 16:26:39.749098    1961 addons.go:234] Setting addon yakd=true in "addons-979000"
	I0913 16:26:39.749099    1961 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-979000"
	I0913 16:26:39.749111    1961 host.go:66] Checking if "addons-979000" exists ...
	I0913 16:26:39.749130    1961 addons.go:69] Setting inspektor-gadget=true in profile "addons-979000"
	I0913 16:26:39.749141    1961 addons.go:234] Setting addon inspektor-gadget=true in "addons-979000"
	I0913 16:26:39.749156    1961 host.go:66] Checking if "addons-979000" exists ...
	I0913 16:26:39.749190    1961 addons.go:69] Setting ingress=true in profile "addons-979000"
	I0913 16:26:39.749200    1961 addons.go:234] Setting addon ingress=true in "addons-979000"
	I0913 16:26:39.749217    1961 host.go:66] Checking if "addons-979000" exists ...
	I0913 16:26:39.749224    1961 addons.go:69] Setting gcp-auth=true in profile "addons-979000"
	I0913 16:26:39.749235    1961 mustload.go:65] Loading cluster: addons-979000
	I0913 16:26:39.749293    1961 addons.go:69] Setting cloud-spanner=true in profile "addons-979000"
	I0913 16:26:39.749308    1961 config.go:182] Loaded profile config "addons-979000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 16:26:39.749311    1961 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-979000"
	I0913 16:26:39.749315    1961 addons.go:234] Setting addon cloud-spanner=true in "addons-979000"
	I0913 16:26:39.749321    1961 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-979000"
	I0913 16:26:39.749428    1961 host.go:66] Checking if "addons-979000" exists ...
	I0913 16:26:39.749461    1961 retry.go:31] will retry after 950.370229ms: connect: dial unix /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/monitor: connect: connection refused
	I0913 16:26:39.749471    1961 addons.go:69] Setting ingress-dns=true in profile "addons-979000"
	I0913 16:26:39.749475    1961 addons.go:234] Setting addon ingress-dns=true in "addons-979000"
	I0913 16:26:39.749485    1961 host.go:66] Checking if "addons-979000" exists ...
	I0913 16:26:39.749178    1961 addons.go:69] Setting storage-provisioner=true in profile "addons-979000"
	I0913 16:26:39.749546    1961 addons.go:234] Setting addon storage-provisioner=true in "addons-979000"
	I0913 16:26:39.749550    1961 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-979000"
	I0913 16:26:39.749555    1961 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-979000"
	I0913 16:26:39.749557    1961 host.go:66] Checking if "addons-979000" exists ...
	I0913 16:26:39.749631    1961 retry.go:31] will retry after 886.549971ms: connect: dial unix /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/monitor: connect: connection refused
	I0913 16:26:39.749650    1961 retry.go:31] will retry after 1.047206842s: connect: dial unix /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/monitor: connect: connection refused
	I0913 16:26:39.749720    1961 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-979000"
	I0913 16:26:39.749725    1961 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-979000"
	I0913 16:26:39.749729    1961 retry.go:31] will retry after 576.256056ms: connect: dial unix /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/monitor: connect: connection refused
	I0913 16:26:39.749735    1961 addons.go:69] Setting metrics-server=true in profile "addons-979000"
	I0913 16:26:39.749732    1961 host.go:66] Checking if "addons-979000" exists ...
	I0913 16:26:39.749740    1961 addons.go:234] Setting addon metrics-server=true in "addons-979000"
	I0913 16:26:39.749746    1961 host.go:66] Checking if "addons-979000" exists ...
	I0913 16:26:39.749787    1961 addons.go:69] Setting volcano=true in profile "addons-979000"
	I0913 16:26:39.749813    1961 retry.go:31] will retry after 1.363089169s: connect: dial unix /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/monitor: connect: connection refused
	I0913 16:26:39.749821    1961 addons.go:69] Setting registry=true in profile "addons-979000"
	I0913 16:26:39.749825    1961 addons.go:234] Setting addon registry=true in "addons-979000"
	I0913 16:26:39.749824    1961 addons.go:234] Setting addon volcano=true in "addons-979000"
	I0913 16:26:39.749834    1961 host.go:66] Checking if "addons-979000" exists ...
	I0913 16:26:39.749857    1961 host.go:66] Checking if "addons-979000" exists ...
	I0913 16:26:39.749931    1961 retry.go:31] will retry after 734.055106ms: connect: dial unix /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/monitor: connect: connection refused
	I0913 16:26:39.749940    1961 retry.go:31] will retry after 927.499107ms: connect: dial unix /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/monitor: connect: connection refused
	I0913 16:26:39.749951    1961 retry.go:31] will retry after 1.17147658s: connect: dial unix /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/monitor: connect: connection refused
	I0913 16:26:39.749953    1961 addons.go:69] Setting volumesnapshots=true in profile "addons-979000"
	I0913 16:26:39.749962    1961 retry.go:31] will retry after 1.404103847s: connect: dial unix /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/monitor: connect: connection refused
	I0913 16:26:39.749966    1961 addons.go:234] Setting addon volumesnapshots=true in "addons-979000"
	I0913 16:26:39.749996    1961 host.go:66] Checking if "addons-979000" exists ...
	I0913 16:26:39.749348    1961 host.go:66] Checking if "addons-979000" exists ...
	I0913 16:26:39.750043    1961 retry.go:31] will retry after 1.205173457s: connect: dial unix /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/monitor: connect: connection refused
	I0913 16:26:39.750077    1961 retry.go:31] will retry after 1.484383807s: connect: dial unix /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/monitor: connect: connection refused
	I0913 16:26:39.750207    1961 retry.go:31] will retry after 1.001371419s: connect: dial unix /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/monitor: connect: connection refused
	I0913 16:26:39.751620    1961 addons.go:234] Setting addon default-storageclass=true in "addons-979000"
	I0913 16:26:39.753608    1961 out.go:177] * Verifying Kubernetes components...
	I0913 16:26:39.753876    1961 host.go:66] Checking if "addons-979000" exists ...
	I0913 16:26:39.758393    1961 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 16:26:39.758400    1961 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 16:26:39.758406    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:39.760574    1961 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0913 16:26:39.760577    1961 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0913 16:26:39.764578    1961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 16:26:39.768542    1961 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0913 16:26:39.768555    1961 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0913 16:26:39.768559    1961 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0913 16:26:39.768580    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:39.768561    1961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0913 16:26:39.768653    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:39.807985    1961 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
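The sed pipeline above injects a hosts block (and a log directive) into the CoreDNS Corefile before replacing the ConfigMap; after the replace, the affected fragment reads roughly as follows (reconstructed from the sed expression, surrounding directives elided):

        log
        errors
        hosts {
           192.168.105.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf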
	I0913 16:26:39.879720    1961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 16:26:39.889395    1961 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0913 16:26:39.889407    1961 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0913 16:26:39.893871    1961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0913 16:26:39.900976    1961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 16:26:39.904915    1961 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0913 16:26:39.904924    1961 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0913 16:26:39.927059    1961 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0913 16:26:39.927072    1961 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0913 16:26:39.958108    1961 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0913 16:26:39.958121    1961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0913 16:26:39.993857    1961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0913 16:26:40.189448    1961 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0913 16:26:40.189916    1961 node_ready.go:35] waiting up to 6m0s for node "addons-979000" to be "Ready" ...
	I0913 16:26:40.207780    1961 node_ready.go:49] node "addons-979000" has status "Ready":"True"
	I0913 16:26:40.207801    1961 node_ready.go:38] duration metric: took 17.864375ms for node "addons-979000" to be "Ready" ...
	I0913 16:26:40.207806    1961 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 16:26:40.220878    1961 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9nqlt" in "kube-system" namespace to be "Ready" ...
	I0913 16:26:40.317541    1961 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-979000 service yakd-dashboard -n yakd-dashboard
	
	I0913 16:26:40.333559    1961 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0913 16:26:40.343448    1961 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0913 16:26:40.353530    1961 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0913 16:26:40.357627    1961 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0913 16:26:40.361589    1961 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0913 16:26:40.365553    1961 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0913 16:26:40.369568    1961 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0913 16:26:40.373540    1961 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0913 16:26:40.377561    1961 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0913 16:26:40.377569    1961 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0913 16:26:40.377578    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:40.409008    1961 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0913 16:26:40.409018    1961 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0913 16:26:40.415039    1961 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0913 16:26:40.415050    1961 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0913 16:26:40.421000    1961 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0913 16:26:40.421007    1961 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0913 16:26:40.428508    1961 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0913 16:26:40.428515    1961 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0913 16:26:40.434418    1961 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0913 16:26:40.434426    1961 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0913 16:26:40.440010    1961 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0913 16:26:40.440016    1961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0913 16:26:40.446310    1961 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0913 16:26:40.446316    1961 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0913 16:26:40.452116    1961 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0913 16:26:40.452122    1961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0913 16:26:40.458424    1961 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0913 16:26:40.458430    1961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0913 16:26:40.464372    1961 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 16:26:40.464380    1961 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0913 16:26:40.470529    1961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 16:26:40.487282    1961 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-979000"
	I0913 16:26:40.487362    1961 host.go:66] Checking if "addons-979000" exists ...
	I0913 16:26:40.491511    1961 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0913 16:26:40.495472    1961 out.go:177]   - Using image docker.io/busybox:stable
	I0913 16:26:40.499473    1961 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 16:26:40.499480    1961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0913 16:26:40.499490    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:40.541944    1961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 16:26:40.642465    1961 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0913 16:26:40.656535    1961 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 16:26:40.664762    1961 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 16:26:40.666028    1961 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 16:26:40.666035    1961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0913 16:26:40.666045    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:40.683548    1961 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0913 16:26:40.686511    1961 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 16:26:40.686519    1961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0913 16:26:40.686530    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:40.692364    1961 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-979000" context rescaled to 1 replicas
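The kapi.go:214 line above rescales the coredns deployment down to a single replica for the single-node profile. A sketch of the same operation through client-go's scale subresource (namespace and deployment name from the log; whether minikube goes through this exact subresource is an assumption):

    package main

    import (
        "context"
        "log"

        autoscalingv1 "k8s.io/api/autoscaling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        scale := &autoscalingv1.Scale{
            ObjectMeta: metav1.ObjectMeta{Name: "coredns", Namespace: "kube-system"},
            Spec:       autoscalingv1.ScaleSpec{Replicas: 1},
        }
        // Write through the scale subresource rather than mutating the whole Deployment.
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(
            context.Background(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
            log.Fatal(err)
        }
    }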
	I0913 16:26:40.704488    1961 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0913 16:26:40.708480    1961 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0913 16:26:40.708494    1961 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0913 16:26:40.708507    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:40.755455    1961 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0913 16:26:40.759490    1961 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0913 16:26:40.759501    1961 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0913 16:26:40.759512    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:40.781356    1961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 16:26:40.798003    1961 host.go:66] Checking if "addons-979000" exists ...
	I0913 16:26:40.825686    1961 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0913 16:26:40.825698    1961 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0913 16:26:40.833432    1961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 16:26:40.857582    1961 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0913 16:26:40.857594    1961 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0913 16:26:40.874391    1961 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0913 16:26:40.874407    1961 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0913 16:26:40.878586    1961 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0913 16:26:40.878596    1961 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0913 16:26:40.918748    1961 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0913 16:26:40.918760    1961 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0913 16:26:40.926655    1961 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0913 16:26:40.930709    1961 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 16:26:40.930719    1961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0913 16:26:40.930729    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:40.930988    1961 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0913 16:26:40.930993    1961 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0913 16:26:40.957738    1961 out.go:177]   - Using image docker.io/registry:2.8.3
	I0913 16:26:40.965504    1961 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0913 16:26:40.969922    1961 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0913 16:26:40.969934    1961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0913 16:26:40.969945    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:40.991883    1961 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0913 16:26:40.991899    1961 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0913 16:26:41.011483    1961 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0913 16:26:41.011495    1961 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0913 16:26:41.045605    1961 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0913 16:26:41.045618    1961 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0913 16:26:41.063900    1961 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0913 16:26:41.063913    1961 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0913 16:26:41.071490    1961 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 16:26:41.071501    1961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0913 16:26:41.102448    1961 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0913 16:26:41.102462    1961 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0913 16:26:41.109600    1961 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 16:26:41.109612    1961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0913 16:26:41.112274    1961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 16:26:41.117572    1961 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 16:26:41.121567    1961 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 16:26:41.121578    1961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 16:26:41.121591    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:41.121884    1961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 16:26:41.139812    1961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 16:26:41.158550    1961 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0913 16:26:41.161431    1961 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 16:26:41.161441    1961 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 16:26:41.161453    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:41.161767    1961 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0913 16:26:41.161776    1961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0913 16:26:41.239554    1961 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0913 16:26:41.246586    1961 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0913 16:26:41.253622    1961 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0913 16:26:41.259854    1961 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 16:26:41.259864    1961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0913 16:26:41.259874    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:41.278878    1961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0913 16:26:41.323140    1961 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 16:26:41.323150    1961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0913 16:26:41.325457    1961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 16:26:41.430311    1961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 16:26:41.474522    1961 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 16:26:41.474536    1961 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 16:26:41.688890    1961 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 16:26:41.688906    1961 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 16:26:41.723714    1961 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-9nqlt" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-9nqlt" not found
	I0913 16:26:41.723725    1961 pod_ready.go:82] duration metric: took 1.5028625s for pod "coredns-7c65d6cfc9-9nqlt" in "kube-system" namespace to be "Ready" ...
	E0913 16:26:41.723731    1961 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-9nqlt" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-9nqlt" not found
	I0913 16:26:41.723734    1961 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pgx28" in "kube-system" namespace to be "Ready" ...
	I0913 16:26:41.859905    1961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 16:26:43.676690    1961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.206201625s)
	I0913 16:26:43.676700    1961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.134800667s)
	I0913 16:26:43.676725    1961 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-979000"
	I0913 16:26:43.681545    1961 out.go:177] * Verifying csi-hostpath-driver addon...
	I0913 16:26:43.693974    1961 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0913 16:26:43.705015    1961 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0913 16:26:43.705025    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:43.734220    1961 pod_ready.go:103] pod "coredns-7c65d6cfc9-pgx28" in "kube-system" namespace has status "Ready":"False"
	I0913 16:26:43.846027    1961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.012632209s)
	I0913 16:26:43.846031    1961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.06471825s)
	I0913 16:26:43.846055    1961 addons.go:475] Verifying addon ingress=true in "addons-979000"
	I0913 16:26:43.846125    1961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (2.724283208s)
	I0913 16:26:43.846176    1961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.706401708s)
	W0913 16:26:43.846189    1961 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 16:26:43.846201    1961 retry.go:31] will retry after 147.838109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
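The failure above is the classic CRD ordering race: the volumesnapshot CRDs and a VolumeSnapshotClass that instantiates them are submitted in one kubectl invocation, and the REST mapping for the new kind is not yet discoverable when the class is applied, hence "ensure CRDs are installed first". minikube's retry.go simply waits and re-applies; the retried command visible later in this log adds --force and completes in about 2.6s, because by then the CRDs from the first pass exist. A generic sketch of that retry-with-backoff shape, assuming any error is retryable (attempt count and intervals are illustrative, not minikube's actual policy):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    // retry runs op until it succeeds, doubling the delay after each failure.
    func retry(attempts int, initial time.Duration, op func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            log.Printf("will retry after %v: %v", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
        return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
    }

    func main() {
        err := retry(5, 150*time.Millisecond, func() error {
            // Mirrors the retried command in the log, which adds --force;
            // on the second pass the CRDs already exist, so the mapping resolves.
            return exec.Command("kubectl", "apply", "--force",
                "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml").Run()
        })
        if err != nil {
            log.Fatal(err)
        }
    }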
	I0913 16:26:43.846098    1961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.733861083s)
	I0913 16:26:43.846243    1961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.5673975s)
	I0913 16:26:43.846253    1961 addons.go:475] Verifying addon registry=true in "addons-979000"
	I0913 16:26:43.846340    1961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.52092075s)
	I0913 16:26:43.852626    1961 out.go:177] * Verifying ingress addon...
	I0913 16:26:43.860498    1961 out.go:177] * Verifying registry addon...
	I0913 16:26:43.868155    1961 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0913 16:26:43.870851    1961 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0913 16:26:43.876726    1961 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0913 16:26:43.876735    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:43.878314    1961 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0913 16:26:43.878321    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:43.996165    1961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 16:26:44.258835    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:44.389161    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:44.389288    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:44.713043    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:44.884358    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:44.884436    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:44.990750    1961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.560486166s)
	I0913 16:26:44.990785    1961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.130919625s)
	I0913 16:26:44.990795    1961 addons.go:475] Verifying addon metrics-server=true in "addons-979000"
	I0913 16:26:45.227493    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:45.372508    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:45.372594    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:45.698900    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:45.872463    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:45.873129    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:46.203421    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:46.230821    1961 pod_ready.go:103] pod "coredns-7c65d6cfc9-pgx28" in "kube-system" namespace has status "Ready":"False"
	I0913 16:26:46.381871    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:46.382083    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:46.595123    1961 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.59898325s)
	I0913 16:26:46.698682    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:46.872975    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:46.873563    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:47.198877    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:47.373155    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:47.373440    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:47.698460    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:47.872346    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:47.872415    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:48.198439    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:48.372327    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:48.372492    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:48.403988    1961 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0913 16:26:48.404007    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:48.436071    1961 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0913 16:26:48.442152    1961 addons.go:234] Setting addon gcp-auth=true in "addons-979000"
	I0913 16:26:48.442182    1961 host.go:66] Checking if "addons-979000" exists ...
	I0913 16:26:48.442953    1961 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0913 16:26:48.442960    1961 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/addons-979000/id_rsa Username:docker}
	I0913 16:26:48.474627    1961 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 16:26:48.478558    1961 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0913 16:26:48.481477    1961 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0913 16:26:48.481485    1961 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0913 16:26:48.488999    1961 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0913 16:26:48.489010    1961 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0913 16:26:48.496965    1961 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 16:26:48.496974    1961 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0913 16:26:48.514512    1961 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 16:26:48.698429    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:48.728800    1961 pod_ready.go:103] pod "coredns-7c65d6cfc9-pgx28" in "kube-system" namespace has status "Ready":"False"
	I0913 16:26:48.874955    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:48.903635    1961 addons.go:475] Verifying addon gcp-auth=true in "addons-979000"
	I0913 16:26:48.909055    1961 out.go:177] * Verifying gcp-auth addon...
	I0913 16:26:48.915383    1961 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0913 16:26:48.972925    1961 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
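The kapi.go:75/86 pairs throughout this section are a label-selector list followed by a wait on pod phase; "Found 0 Pods" means the list came back empty and the loop keeps polling until the pod appears and runs. In client-go terms the lookup is roughly (namespace and selector from the gcp-auth lines above):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := cs.CoreV1().Pods("gcp-auth").List(context.Background(),
            metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=gcp-auth"})
        if err != nil {
            log.Fatal(err)
        }
        // An empty Items slice here corresponds to the "Found 0 Pods" log line.
        fmt.Printf("Found %d Pods for label selector\n", len(pods.Items))
    }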
	I0913 16:26:48.973171    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:49.199376    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:49.373455    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:49.373545    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:49.698929    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:49.873479    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:49.874590    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:50.199828    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:50.371956    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:50.372289    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:50.698305    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:50.728989    1961 pod_ready.go:103] pod "coredns-7c65d6cfc9-pgx28" in "kube-system" namespace has status "Ready":"False"
	I0913 16:26:50.872381    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:50.872491    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:51.198711    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:51.372814    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:51.373144    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:51.699093    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:51.873117    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:51.873148    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:52.198692    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:52.372350    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:52.372828    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:52.699787    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:52.872211    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:52.872274    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:53.196638    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:53.228264    1961 pod_ready.go:103] pod "coredns-7c65d6cfc9-pgx28" in "kube-system" namespace has status "Ready":"False"
	I0913 16:26:53.372270    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:53.372516    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:53.698187    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:53.872407    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:53.872443    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:54.198419    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:54.372324    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:54.372739    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:54.697014    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:54.872216    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:54.872278    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:55.198313    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:55.372220    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:55.372539    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:55.698169    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:55.727815    1961 pod_ready.go:103] pod "coredns-7c65d6cfc9-pgx28" in "kube-system" namespace has status "Ready":"False"
	I0913 16:26:55.872240    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:55.872326    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:56.198356    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:56.373125    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:56.373212    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:56.698350    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:56.872563    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:56.873280    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:57.197871    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:57.373400    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:57.374361    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:57.697839    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:57.727942    1961 pod_ready.go:103] pod "coredns-7c65d6cfc9-pgx28" in "kube-system" namespace has status "Ready":"False"
	I0913 16:26:57.872449    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:57.872741    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:58.197493    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:58.371437    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:58.371966    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:58.697965    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:58.872061    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:58.872314    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:59.198499    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:59.371345    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:26:59.372042    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:59.696588    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:26:59.777390    1961 pod_ready.go:103] pod "coredns-7c65d6cfc9-pgx28" in "kube-system" namespace has status "Ready":"False"
	I0913 16:26:59.871968    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:26:59.872302    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:00.198230    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:00.372142    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:00.372617    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:00.698027    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:00.871983    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:00.872444    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:01.198289    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:01.371996    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:01.372217    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:01.697981    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:01.872291    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:01.872620    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:02.198203    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:02.227924    1961 pod_ready.go:103] pod "coredns-7c65d6cfc9-pgx28" in "kube-system" namespace has status "Ready":"False"
	I0913 16:27:02.372192    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:02.373012    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:02.698304    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:02.876004    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:02.876306    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:03.199070    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:03.373689    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:03.375161    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:03.698029    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:03.872926    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:03.872989    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:04.198043    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:04.371421    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:04.372225    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:04.698897    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:04.728874    1961 pod_ready.go:103] pod "coredns-7c65d6cfc9-pgx28" in "kube-system" namespace has status "Ready":"False"
	I0913 16:27:04.873937    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:04.874514    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:05.198350    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:05.371421    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:05.372204    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:05.697991    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:05.870475    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:05.871869    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:06.198626    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:06.371320    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:06.372342    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:06.697792    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:06.871978    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:06.872067    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:07.197957    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:07.228311    1961 pod_ready.go:103] pod "coredns-7c65d6cfc9-pgx28" in "kube-system" namespace has status "Ready":"False"
	I0913 16:27:07.371855    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:07.372294    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:07.697701    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:07.872152    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:07.872548    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:08.197812    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:08.371930    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:08.372317    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:08.697995    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:08.872296    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 16:27:08.872416    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:09.199194    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:09.372887    1961 kapi.go:107] duration metric: took 25.50249475s to wait for kubernetes.io/minikube-addons=registry ...
	I0913 16:27:09.372991    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:09.698219    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:09.727645    1961 pod_ready.go:103] pod "coredns-7c65d6cfc9-pgx28" in "kube-system" namespace has status "Ready":"False"
	I0913 16:27:09.874171    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:10.198106    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:10.372057    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:10.697793    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:10.872327    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:11.198816    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:11.370701    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:11.697529    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:11.727746    1961 pod_ready.go:103] pod "coredns-7c65d6cfc9-pgx28" in "kube-system" namespace has status "Ready":"False"
	I0913 16:27:11.871520    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:12.197767    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:12.371780    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:12.697523    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:12.871669    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:13.197745    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:13.371685    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:13.697937    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:13.727506    1961 pod_ready.go:93] pod "coredns-7c65d6cfc9-pgx28" in "kube-system" namespace has status "Ready":"True"
	I0913 16:27:13.727514    1961 pod_ready.go:82] duration metric: took 32.004359459s for pod "coredns-7c65d6cfc9-pgx28" in "kube-system" namespace to be "Ready" ...
	I0913 16:27:13.727519    1961 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-979000" in "kube-system" namespace to be "Ready" ...
	I0913 16:27:13.729651    1961 pod_ready.go:93] pod "etcd-addons-979000" in "kube-system" namespace has status "Ready":"True"
	I0913 16:27:13.729659    1961 pod_ready.go:82] duration metric: took 2.137666ms for pod "etcd-addons-979000" in "kube-system" namespace to be "Ready" ...
	I0913 16:27:13.729663    1961 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-979000" in "kube-system" namespace to be "Ready" ...
	I0913 16:27:13.731566    1961 pod_ready.go:93] pod "kube-apiserver-addons-979000" in "kube-system" namespace has status "Ready":"True"
	I0913 16:27:13.731573    1961 pod_ready.go:82] duration metric: took 1.907209ms for pod "kube-apiserver-addons-979000" in "kube-system" namespace to be "Ready" ...
	I0913 16:27:13.731577    1961 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-979000" in "kube-system" namespace to be "Ready" ...
	I0913 16:27:13.733738    1961 pod_ready.go:93] pod "kube-controller-manager-addons-979000" in "kube-system" namespace has status "Ready":"True"
	I0913 16:27:13.733746    1961 pod_ready.go:82] duration metric: took 2.166375ms for pod "kube-controller-manager-addons-979000" in "kube-system" namespace to be "Ready" ...
	I0913 16:27:13.733750    1961 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lb8xl" in "kube-system" namespace to be "Ready" ...
	I0913 16:27:13.735694    1961 pod_ready.go:93] pod "kube-proxy-lb8xl" in "kube-system" namespace has status "Ready":"True"
	I0913 16:27:13.735698    1961 pod_ready.go:82] duration metric: took 1.946292ms for pod "kube-proxy-lb8xl" in "kube-system" namespace to be "Ready" ...
	I0913 16:27:13.735702    1961 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-979000" in "kube-system" namespace to be "Ready" ...
	I0913 16:27:13.871705    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:14.127448    1961 pod_ready.go:93] pod "kube-scheduler-addons-979000" in "kube-system" namespace has status "Ready":"True"
	I0913 16:27:14.127459    1961 pod_ready.go:82] duration metric: took 391.76025ms for pod "kube-scheduler-addons-979000" in "kube-system" namespace to be "Ready" ...
	I0913 16:27:14.127462    1961 pod_ready.go:39] duration metric: took 33.9202685s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
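
The pod_ready.go lines above poll each system-critical pod every two seconds until its Ready condition reports True. A minimal sketch of that check with client-go (the kubeconfig loading, the hard-coded namespace and pod name from the log, and the 2s cadence are illustrative assumptions, not minikube's actual implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether a pod's Ready condition is True, which is
    // the status the log lines above print as "Ready":"True".
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Pod name taken from the log above; the 2s sleep matches the
        // cadence of the pod_ready.go:103 timestamps.
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "coredns-7c65d6cfc9-pgx28", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("coredns is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }
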
	I0913 16:27:14.127472    1961 api_server.go:52] waiting for apiserver process to appear ...
	I0913 16:27:14.127539    1961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 16:27:14.135031    1961 api_server.go:72] duration metric: took 34.386675958s to wait for apiserver process to appear ...
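
Before probing the API, api_server.go confirms a kube-apiserver process exists at all; the log shows it running `sudo pgrep -xnf kube-apiserver.*minikube.*` on the node via ssh_runner. A local sketch of the same process check (assumption: pgrep on PATH; minikube actually executes it inside the guest over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // pgrep -x: exact match, -n: newest matching process, -f: match
        // against the full command line. Exit status 0 means a process whose
        // command line matches "kube-apiserver.*minikube.*" exists.
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("apiserver process not found:", err)
            return
        }
        fmt.Printf("apiserver PID: %s", out)
    }
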
	I0913 16:27:14.135043    1961 api_server.go:88] waiting for apiserver healthz status ...
	I0913 16:27:14.135054    1961 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0913 16:27:14.137676    1961 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0913 16:27:14.138152    1961 api_server.go:141] control plane version: v1.31.1
	I0913 16:27:14.138159    1961 api_server.go:131] duration metric: took 3.113334ms to wait for apiserver health ...
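
With the process up, api_server.go probes /healthz at https://192.168.105.2:8443 and expects HTTP 200 with the literal body "ok", exactly the two lines above. A minimal sketch of such a probe (assumption: TLS verification is skipped here for brevity; a real check would trust the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver cert is signed by the cluster CA; this quick
            // sketch skips verification rather than loading that CA.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.105.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with the body "ok", matching the
        // "returned 200: / ok" pair in the log above.
        fmt.Println(resp.StatusCode, string(body))
    }
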
	I0913 16:27:14.138162    1961 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 16:27:14.197577    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:14.332220    1961 system_pods.go:59] 17 kube-system pods found
	I0913 16:27:14.332230    1961 system_pods.go:61] "coredns-7c65d6cfc9-pgx28" [9bf54fc6-89d3-4108-8085-be3358f15b17] Running
	I0913 16:27:14.332235    1961 system_pods.go:61] "csi-hostpath-attacher-0" [7d8839d4-0c8f-4dfe-9434-af2a03fbab79] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 16:27:14.332238    1961 system_pods.go:61] "csi-hostpath-resizer-0" [4430e1dc-f410-4533-a0e3-c73e6c613a7c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 16:27:14.332241    1961 system_pods.go:61] "csi-hostpathplugin-bsb6w" [f514ebe2-a194-43d9-8032-8def4acf167c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 16:27:14.332243    1961 system_pods.go:61] "etcd-addons-979000" [6cd9ac89-54c3-4624-a4da-effadf56300e] Running
	I0913 16:27:14.332245    1961 system_pods.go:61] "kube-apiserver-addons-979000" [dc0142ab-52f0-4edb-9f1f-130cf54c40e0] Running
	I0913 16:27:14.332247    1961 system_pods.go:61] "kube-controller-manager-addons-979000" [ce30f020-e23a-4dc0-b9d1-40232771caa1] Running
	I0913 16:27:14.332250    1961 system_pods.go:61] "kube-ingress-dns-minikube" [4f0da886-403d-47e0-8118-8b07cbd74156] Running
	I0913 16:27:14.332251    1961 system_pods.go:61] "kube-proxy-lb8xl" [fe32b56f-7eba-4008-af7e-8f71d30d33f6] Running
	I0913 16:27:14.332253    1961 system_pods.go:61] "kube-scheduler-addons-979000" [07636878-abdc-4420-bc91-228092d091d2] Running
	I0913 16:27:14.332256    1961 system_pods.go:61] "metrics-server-84c5f94fbc-rwttn" [ce61eae0-27a9-429c-a37b-50d96bcabb99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 16:27:14.332258    1961 system_pods.go:61] "nvidia-device-plugin-daemonset-g7fjk" [f7e430eb-3175-4af5-b895-1abcccee28a5] Running
	I0913 16:27:14.332260    1961 system_pods.go:61] "registry-66c9cd494c-b2tw6" [f3db3871-9bfc-4a43-96ef-55856578d904] Running
	I0913 16:27:14.332262    1961 system_pods.go:61] "registry-proxy-nvbdn" [cd8d98f1-6d88-45e6-a0e4-d8808da7a54f] Running
	I0913 16:27:14.332265    1961 system_pods.go:61] "snapshot-controller-56fcc65765-cs66g" [9a24c606-83c1-4dce-a264-9d6e5954162a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 16:27:14.332268    1961 system_pods.go:61] "snapshot-controller-56fcc65765-hxhxq" [28560287-757c-43fb-96c2-a7cea2bfb702] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 16:27:14.332270    1961 system_pods.go:61] "storage-provisioner" [faf08cee-2382-4b0e-b21c-7413e0d7daae] Running
	I0913 16:27:14.332273    1961 system_pods.go:74] duration metric: took 194.111583ms to wait for pod list to return data ...
	I0913 16:27:14.332278    1961 default_sa.go:34] waiting for default service account to be created ...
	I0913 16:27:14.371764    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:14.528484    1961 default_sa.go:45] found service account: "default"
	I0913 16:27:14.528497    1961 default_sa.go:55] duration metric: took 196.219458ms for default service account to be created ...
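
default_sa.go waits for the "default" ServiceAccount because the controller manager creates it asynchronously in each new namespace, and pods cannot be admitted until it exists. A sketch of that wait (same assumed kubeconfig setup as the earlier sketch; the 200ms retry interval is illustrative):

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The "default" ServiceAccount appears shortly after the namespace
        // does, so a fresh cluster may need a brief retry loop.
        for {
            if _, err := cs.CoreV1().ServiceAccounts("default").Get(
                context.TODO(), "default", metav1.GetOptions{}); err == nil {
                fmt.Println(`found service account: "default"`)
                return
            }
            time.Sleep(200 * time.Millisecond)
        }
    }
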
	I0913 16:27:14.528501    1961 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 16:27:14.697880    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:14.733532    1961 system_pods.go:86] 17 kube-system pods found
	I0913 16:27:14.733543    1961 system_pods.go:89] "coredns-7c65d6cfc9-pgx28" [9bf54fc6-89d3-4108-8085-be3358f15b17] Running
	I0913 16:27:14.733547    1961 system_pods.go:89] "csi-hostpath-attacher-0" [7d8839d4-0c8f-4dfe-9434-af2a03fbab79] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 16:27:14.733550    1961 system_pods.go:89] "csi-hostpath-resizer-0" [4430e1dc-f410-4533-a0e3-c73e6c613a7c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 16:27:14.733554    1961 system_pods.go:89] "csi-hostpathplugin-bsb6w" [f514ebe2-a194-43d9-8032-8def4acf167c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 16:27:14.733556    1961 system_pods.go:89] "etcd-addons-979000" [6cd9ac89-54c3-4624-a4da-effadf56300e] Running
	I0913 16:27:14.733558    1961 system_pods.go:89] "kube-apiserver-addons-979000" [dc0142ab-52f0-4edb-9f1f-130cf54c40e0] Running
	I0913 16:27:14.733560    1961 system_pods.go:89] "kube-controller-manager-addons-979000" [ce30f020-e23a-4dc0-b9d1-40232771caa1] Running
	I0913 16:27:14.733569    1961 system_pods.go:89] "kube-ingress-dns-minikube" [4f0da886-403d-47e0-8118-8b07cbd74156] Running
	I0913 16:27:14.733571    1961 system_pods.go:89] "kube-proxy-lb8xl" [fe32b56f-7eba-4008-af7e-8f71d30d33f6] Running
	I0913 16:27:14.733573    1961 system_pods.go:89] "kube-scheduler-addons-979000" [07636878-abdc-4420-bc91-228092d091d2] Running
	I0913 16:27:14.733576    1961 system_pods.go:89] "metrics-server-84c5f94fbc-rwttn" [ce61eae0-27a9-429c-a37b-50d96bcabb99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 16:27:14.733584    1961 system_pods.go:89] "nvidia-device-plugin-daemonset-g7fjk" [f7e430eb-3175-4af5-b895-1abcccee28a5] Running
	I0913 16:27:14.733587    1961 system_pods.go:89] "registry-66c9cd494c-b2tw6" [f3db3871-9bfc-4a43-96ef-55856578d904] Running
	I0913 16:27:14.733588    1961 system_pods.go:89] "registry-proxy-nvbdn" [cd8d98f1-6d88-45e6-a0e4-d8808da7a54f] Running
	I0913 16:27:14.733591    1961 system_pods.go:89] "snapshot-controller-56fcc65765-cs66g" [9a24c606-83c1-4dce-a264-9d6e5954162a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 16:27:14.733594    1961 system_pods.go:89] "snapshot-controller-56fcc65765-hxhxq" [28560287-757c-43fb-96c2-a7cea2bfb702] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 16:27:14.733595    1961 system_pods.go:89] "storage-provisioner" [faf08cee-2382-4b0e-b21c-7413e0d7daae] Running
	I0913 16:27:14.733598    1961 system_pods.go:126] duration metric: took 205.098792ms to wait for k8s-apps to be running ...
	I0913 16:27:14.733602    1961 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 16:27:14.733664    1961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 16:27:14.740022    1961 system_svc.go:56] duration metric: took 6.416459ms WaitForService to wait for kubelet
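
system_svc.go treats the exit status of `systemctl is-active --quiet ...` as the whole answer: with --quiet the command prints nothing and exits 0 only when the unit is active. A local sketch of the same check (minikube runs the command on the node over SSH; the kubelet unit name is taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Run() returns a non-nil error for any non-zero exit status, so the
        // error value alone tells us whether the unit is active.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
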
	I0913 16:27:14.740035    1961 kubeadm.go:582] duration metric: took 34.99169375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 16:27:14.740045    1961 node_conditions.go:102] verifying NodePressure condition ...
	I0913 16:27:14.871562    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:14.928308    1961 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0913 16:27:14.928316    1961 node_conditions.go:123] node cpu capacity is 2
	I0913 16:27:14.928322    1961 node_conditions.go:105] duration metric: took 188.277583ms to run NodePressure ...
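
node_conditions.go reads each node's reported capacity (the 17734596Ki ephemeral storage and 2-CPU figures above) and verifies no pressure condition is set. A sketch that surfaces the same data via client-go (assumed kubeconfig setup, as before):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // These quantities are where values like "17734596Ki" and "2"
            // in the log above come from.
            fmt.Println(n.Name,
                "cpu:", n.Status.Capacity.Cpu().String(),
                "ephemeral-storage:", n.Status.Capacity.StorageEphemeral().String())
            for _, c := range n.Status.Conditions {
                // On a healthy node all three pressure conditions are False.
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    fmt.Printf("  %s=%s\n", c.Type, c.Status)
                }
            }
        }
    }
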
	I0913 16:27:14.928328    1961 start.go:241] waiting for startup goroutines ...
	I0913 16:27:15.197738    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:15.371699    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:15.697647    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:15.871943    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:16.197963    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:16.371950    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:16.697714    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:16.871677    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:17.198446    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:17.371601    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:17.697490    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:17.871745    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:18.197468    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:18.371567    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:18.697563    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:18.872110    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:19.197742    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:19.371633    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:19.697459    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:19.871621    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:20.198231    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:20.372262    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:20.697356    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:20.871345    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:21.197264    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:21.372603    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:21.697326    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:21.872325    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:22.198444    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:22.372021    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:22.697227    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:22.870993    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:23.197422    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:23.371603    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:23.697644    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:23.871633    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:24.197667    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:24.371605    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:24.697450    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:24.871313    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:25.197546    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:25.371890    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:25.698177    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:25.871763    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:26.197793    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:26.372056    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:26.696379    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:26.871522    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:27.197821    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:27.371787    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:27.697483    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:27.872934    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:28.197845    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:28.371895    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:28.698005    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:28.875067    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:29.200253    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:29.371547    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:29.696394    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:29.872045    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:30.197646    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:30.371876    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:30.697526    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:30.871475    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:31.201146    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:31.371318    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:31.697349    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:31.873045    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:32.200846    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:32.372372    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:32.696292    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:32.871195    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:33.201534    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:33.371854    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:33.698293    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:33.873400    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:34.198857    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:34.372108    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:34.698301    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:34.871247    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:35.197351    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:35.372035    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:35.698207    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:35.871489    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:36.197626    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:36.371819    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:36.697020    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:36.871301    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:37.197485    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:37.373045    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:37.697669    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:37.873538    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:38.197071    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:38.372387    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:38.697207    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:38.871441    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:39.205753    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:39.371595    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:39.697951    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:39.872682    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:40.197725    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:40.373148    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:40.697679    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:40.871427    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:41.197383    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:41.370968    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:41.696875    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:41.871105    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:42.197240    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:42.371277    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:42.697460    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:42.872417    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:43.199507    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:43.373449    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:43.696974    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:43.871216    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:44.197709    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 16:27:44.371602    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:44.696312    1961 kapi.go:107] duration metric: took 1m1.003442541s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0913 16:27:44.874517    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:45.379650    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:45.874893    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:46.379004    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:46.876026    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:47.379814    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:47.869844    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:48.372565    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:48.871271    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:49.371228    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:49.871668    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:50.372398    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:50.871554    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:51.371524    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:51.871023    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:52.371197    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:52.871279    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:53.371259    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:53.871307    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:54.371170    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:54.870959    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:55.371585    1961 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 16:27:55.871393    1961 kapi.go:107] duration metric: took 1m12.004552791s to wait for app.kubernetes.io/name=ingress-nginx ...
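
The kapi.go:96 lines that dominate this log are a label-selector poll: list the pods matching a selector roughly every 500ms, keep waiting while any is still Pending, and let kapi.go:107 record the total as a duration metric. A sketch of the same pattern using apimachinery's wait helpers (assumptions: the ingress-nginx namespace and the 3-minute timeout are illustrative; minikube's own timeouts differ):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        selector := "app.kubernetes.io/name=ingress-nginx" // label from the log above
        start := time.Now()
        // Poll every 500ms (the cadence visible in the timestamps) until
        // every matching pod is Running, or give up after 3 minutes.
        err = wait.PollUntilContextTimeout(context.Background(),
            500*time.Millisecond, 3*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods("ingress-nginx").List(ctx,
                    metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // keep waiting, as kapi.go does on Pending
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil
                    }
                }
                return true, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
    }
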
	I0913 16:28:11.418023    1961 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 16:28:11.418045    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:11.922015    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:12.422676    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:12.924670    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:13.423338    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:13.922576    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:14.425239    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:14.919004    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:15.418946    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:15.925146    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:16.424351    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:16.925844    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:17.420731    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:17.918975    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:18.422304    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:18.918716    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:19.422922    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:19.922670    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:20.422933    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:20.923526    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:21.422837    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:21.920196    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:22.419820    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:22.921277    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:23.422354    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:23.922168    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:24.420536    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:24.918639    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:25.417723    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:25.917494    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:26.417979    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:26.919611    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:27.420266    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:27.924143    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:28.422642    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:28.923390    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:29.420668    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:29.923621    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:30.423605    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:30.921886    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:31.420459    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:31.922275    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:32.423666    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:32.923573    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:33.424287    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:33.917646    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:34.419224    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:34.919055    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:35.422937    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:35.918453    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:36.420480    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:36.919708    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:37.422942    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:37.921596    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:38.423708    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:38.923118    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:39.423711    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:39.922914    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:40.421470    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:40.918391    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:41.423904    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:41.923249    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:42.422153    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:42.918428    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:43.422640    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:43.919871    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:44.419908    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:44.920093    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:45.425322    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:45.919623    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:46.423782    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:46.925672    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:47.422316    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:47.919095    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:48.421195    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:48.920697    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:49.424244    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:49.919414    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:50.424079    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:50.923901    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:51.422451    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:51.922161    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:52.416867    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:52.917007    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:53.422154    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:53.917114    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:54.417983    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:54.917124    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:55.421365    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:55.918097    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:56.422168    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:56.917755    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:57.424085    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:57.918441    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:58.424189    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:58.928717    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:59.424479    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:28:59.918536    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:00.422687    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:00.923233    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:01.418211    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:01.923985    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:02.423333    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:02.922713    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:03.422938    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:03.922745    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:04.421630    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:04.918630    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:05.423986    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:05.918773    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:06.424690    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:06.920555    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:07.424391    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:07.923201    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:08.421997    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:08.921173    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:09.421674    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:09.921188    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:10.419199    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:10.921703    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:11.420729    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:11.921891    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:12.420934    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:12.923717    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:13.422334    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:13.925012    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:14.419181    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:14.917466    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:15.416487    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:15.916792    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:16.416472    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:16.917629    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:17.417715    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:17.916562    1961 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 16:29:18.416002    1961 kapi.go:107] duration metric: took 2m29.503340666s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0913 16:29:18.420581    1961 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-979000 cluster.
	I0913 16:29:18.424568    1961 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0913 16:29:18.427503    1961 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0913 16:29:18.430626    1961 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, yakd, storage-provisioner-rancher, ingress-dns, inspektor-gadget, nvidia-device-plugin, storage-provisioner, volcano, metrics-server, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0913 16:29:18.434540    1961 addons.go:510] duration metric: took 2m38.68844525s for enable addons: enabled=[cloud-spanner default-storageclass yakd storage-provisioner-rancher ingress-dns inspektor-gadget nvidia-device-plugin storage-provisioner volcano metrics-server volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
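
The hint a few lines up about the `gcp-auth-skip-secret` key means opting out is just pod metadata: the gcp-auth mutating webhook leaves labeled pods alone. A sketch of such metadata (hypothetical pod and container names; the "true" value is conventional, since the hint above only requires the key):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // A pod carrying this label is skipped by the gcp-auth webhook, so
        // no credentials get mounted into it.
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "no-gcp-creds", // hypothetical
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
            },
        }
        fmt.Println(pod.Name, pod.Labels)
    }
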
	I0913 16:29:18.434560    1961 start.go:246] waiting for cluster config update ...
	I0913 16:29:18.434570    1961 start.go:255] writing updated cluster config ...
	I0913 16:29:18.434912    1961 ssh_runner.go:195] Run: rm -f paused
	I0913 16:29:18.590092    1961 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0913 16:29:18.593522    1961 out.go:201] 
	W0913 16:29:18.596547    1961 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0913 16:29:18.600561    1961 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0913 16:29:18.613442    1961 out.go:177] * Done! kubectl is now configured to use "addons-979000" cluster and "default" namespace by default
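
The "minor skew: 2" that start.go:600 reports above is simple version arithmetic: 31 - 29 = 2, one more than the single minor version of skew kubectl officially supports against an apiserver, hence the warning. A tiny sketch of that calculation (the helper name and the no-validation parsing are assumptions of this sketch):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor components
    // of two "major.minor.patch" version strings (no input validation).
    func minorSkew(a, b string) int {
        minor := func(v string) int {
            m, _ := strconv.Atoi(strings.Split(v, ".")[1])
            return m
        }
        d := minor(a) - minor(b)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        // Versions taken from the log above: kubectl 1.29.2, cluster 1.31.1.
        fmt.Println(minorSkew("1.29.2", "1.31.1")) // prints 2
    }
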
	
	
	==> Docker <==
	Sep 13 23:39:07 addons-979000 cri-dockerd[1171]: time="2024-09-13T23:39:07Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.644102903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.644136768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.644172093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.644206333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 13 23:39:07 addons-979000 dockerd[1272]: time="2024-09-13T23:39:07.709318247Z" level=info msg="ignoring event" container=8065ba251c2ba0ca8845604a7cd15a55b1e6bfacfcd9f267b6661bb56424eca9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:07 addons-979000 dockerd[1272]: time="2024-09-13T23:39:07.709640798Z" level=info msg="ignoring event" container=db4a65bb2c7793843e19d2c0248e91f1bc115801921bfb8b65d355c5656cedd4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.709715201Z" level=info msg="shim disconnected" id=8065ba251c2ba0ca8845604a7cd15a55b1e6bfacfcd9f267b6661bb56424eca9 namespace=moby
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.709773088Z" level=warning msg="cleaning up after shim disconnected" id=8065ba251c2ba0ca8845604a7cd15a55b1e6bfacfcd9f267b6661bb56424eca9 namespace=moby
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.709790062Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.710095597Z" level=info msg="shim disconnected" id=db4a65bb2c7793843e19d2c0248e91f1bc115801921bfb8b65d355c5656cedd4 namespace=moby
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.710153651Z" level=warning msg="cleaning up after shim disconnected" id=db4a65bb2c7793843e19d2c0248e91f1bc115801921bfb8b65d355c5656cedd4 namespace=moby
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.710172210Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.759375344Z" level=info msg="shim disconnected" id=618e28fe10ef088f310b0b429aeec856753e44a7494ae4232d34537c2f958619 namespace=moby
	Sep 13 23:39:07 addons-979000 dockerd[1272]: time="2024-09-13T23:39:07.759570818Z" level=info msg="ignoring event" container=618e28fe10ef088f310b0b429aeec856753e44a7494ae4232d34537c2f958619 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.759666490Z" level=warning msg="cleaning up after shim disconnected" id=618e28fe10ef088f310b0b429aeec856753e44a7494ae4232d34537c2f958619 namespace=moby
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.759676458Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 23:39:07 addons-979000 dockerd[1272]: time="2024-09-13T23:39:07.792827261Z" level=info msg="ignoring event" container=85b0c3f60fb2c326eff8c9b99dfadeb3e1214bdb0d04437505237ae4a955498e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.792948082Z" level=info msg="shim disconnected" id=85b0c3f60fb2c326eff8c9b99dfadeb3e1214bdb0d04437505237ae4a955498e namespace=moby
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.793002466Z" level=warning msg="cleaning up after shim disconnected" id=85b0c3f60fb2c326eff8c9b99dfadeb3e1214bdb0d04437505237ae4a955498e namespace=moby
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.793007179Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 23:39:07 addons-979000 dockerd[1272]: time="2024-09-13T23:39:07.856755573Z" level=info msg="ignoring event" container=0faf86136b2257e131b859cb81a4a69779200d5c413bad481f57aef7153c69f5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.856763789Z" level=info msg="shim disconnected" id=0faf86136b2257e131b859cb81a4a69779200d5c413bad481f57aef7153c69f5 namespace=moby
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.857147271Z" level=warning msg="cleaning up after shim disconnected" id=0faf86136b2257e131b859cb81a4a69779200d5c413bad481f57aef7153c69f5 namespace=moby
	Sep 13 23:39:07 addons-979000 dockerd[1278]: time="2024-09-13T23:39:07.857163703Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	8065ba251c2ba       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              1 second ago        Exited              helper-pod                 0                   43ce8a0215ebe       helper-pod-create-pvc-48e892b2-e457-47ef-ab8f-c1a5597bc326
	8677e72f4732b       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  9 seconds ago       Running             hello-world-app            0                   c1667bd65b114       hello-world-app-55bf9c44b4-2cpsg
	1587a09e54a6d       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                18 seconds ago      Running             nginx                      0                   ed486e620803b       nginx
	57bc2d30e2277       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   6b0fc25b69f6b       gcp-auth-89d5ffd79-cgzx9
	46c9405d97301       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                      0                   76b8b1ff897cc       ingress-nginx-admission-patch-pd6rp
	ba2999523d535       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   7cec7078e1376       ingress-nginx-admission-create-vjwrk
	618e28fe10ef0       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              12 minutes ago      Exited              registry-proxy             0                   0faf86136b225       registry-proxy-nvbdn
	db4a65bb2c779       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             12 minutes ago      Exited              registry                   0                   85b0c3f60fb2c       registry-66c9cd494c-b2tw6
	7dfc538a22f18       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   657fcba26c823       nvidia-device-plugin-daemonset-g7fjk
	886ffa0f35b54       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   d7ca36c95a0f1       local-path-provisioner-86d989889c-b59t8
	e5b05196a21b5       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   3f3b7593d7325       yakd-dashboard-67d98fc6b-2tbjk
	a9c6b2954b418       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   83037b5b3f589       cloud-spanner-emulator-769b77f747-b74dm
	d50e0eca93cb1       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   687b8eb798cc1       storage-provisioner
	2fef24f741cfd       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                 0                   629a5ba801bbf       kube-proxy-lb8xl
	6fc61acfc6d50       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                    0                   152963f2bac48       coredns-7c65d6cfc9-pgx28
	fa363401952b5       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   3232fe413db25       kube-controller-manager-addons-979000
	00f9750561ff1       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   5c080a3edbca1       kube-scheduler-addons-979000
	1ac6c29e13301       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver             0                   6d5e343c74e01       kube-apiserver-addons-979000
	595970063d5b9       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   8ce0b1f838de4       etcd-addons-979000
	
	
	==> coredns [6fc61acfc6d5] <==
	Trace[716950763]: [30.000591548s] [30.000591548s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[506058992]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (13-Sep-2024 23:26:41.634) (total time: 30000ms):
	Trace[506058992]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (23:27:11.634)
	Trace[506058992]: [30.000291445s] [30.000291445s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.8:49354 - 3689 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000131531s
	[INFO] 10.244.0.8:49354 - 2676 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000167589s
	[INFO] 10.244.0.8:50913 - 36678 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00011879s
	[INFO] 10.244.0.8:50913 - 23384 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000041678s
	[INFO] 10.244.0.8:41638 - 32214 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000039056s
	[INFO] 10.244.0.8:41638 - 4311 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000044676s
	[INFO] 10.244.0.8:60264 - 22535 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028854s
	[INFO] 10.244.0.8:60264 - 53766 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000030895s
	[INFO] 10.244.0.8:45417 - 58769 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000026564s
	[INFO] 10.244.0.8:45417 - 18578 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000023733s
	[INFO] 10.244.0.24:59001 - 30866 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000133674s
	[INFO] 10.244.0.24:47667 - 35924 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00016175s
	[INFO] 10.244.0.24:50101 - 36714 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000033866s
	[INFO] 10.244.0.24:51885 - 31175 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000024535s
	[INFO] 10.244.0.24:46846 - 41033 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000027701s
	[INFO] 10.244.0.24:45266 - 34880 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000031117s
	[INFO] 10.244.0.24:34294 - 49649 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001105131s
	[INFO] 10.244.0.24:37215 - 31312 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.002209513s
	
	
	==> describe nodes <==
	Name:               addons-979000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-979000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=addons-979000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T16_26_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-979000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:26:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-979000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 23:38:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 23:35:15 +0000   Fri, 13 Sep 2024 23:26:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 23:35:15 +0000   Fri, 13 Sep 2024 23:26:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 23:35:15 +0000   Fri, 13 Sep 2024 23:26:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 23:35:15 +0000   Fri, 13 Sep 2024 23:26:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-979000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 0991175cceab405ab85d09a5fee7062a
	  System UUID:                0991175cceab405ab85d09a5fee7062a
	  Boot ID:                    45d85a64-b518-45cd-b250-84944d42cbf2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  default                     cloud-spanner-emulator-769b77f747-b74dm                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-world-app-55bf9c44b4-2cpsg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  default                     registry-test                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  gcp-auth                    gcp-auth-89d5ffd79-cgzx9                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-pgx28                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-979000                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-979000                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-979000                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-lb8xl                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-979000                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-g7fjk                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          helper-pod-create-pvc-48e892b2-e457-47ef-ab8f-c1a5597bc326    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  local-path-storage          local-path-provisioner-86d989889c-b59t8                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-2tbjk                                0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-979000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-979000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-979000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-979000 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-979000 event: Registered Node addons-979000 in Controller
	
	
	==> dmesg <==
	[  +5.953265] kauditd_printk_skb: 69 callbacks suppressed
	[Sep13 23:27] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.607314] kauditd_printk_skb: 13 callbacks suppressed
	[  +7.012315] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.098412] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.003920] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.558530] kauditd_printk_skb: 7 callbacks suppressed
	[ +12.205413] kauditd_printk_skb: 22 callbacks suppressed
	[Sep13 23:28] kauditd_printk_skb: 18 callbacks suppressed
	[ +44.581922] kauditd_printk_skb: 2 callbacks suppressed
	[Sep13 23:29] kauditd_printk_skb: 40 callbacks suppressed
	[ +20.423951] kauditd_printk_skb: 21 callbacks suppressed
	[ +10.369685] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.473604] kauditd_printk_skb: 20 callbacks suppressed
	[Sep13 23:30] kauditd_printk_skb: 2 callbacks suppressed
	[Sep13 23:33] kauditd_printk_skb: 2 callbacks suppressed
	[Sep13 23:37] kauditd_printk_skb: 2 callbacks suppressed
	[Sep13 23:38] kauditd_printk_skb: 7 callbacks suppressed
	[  +9.085432] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.768239] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.579377] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.485311] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.209758] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.856504] kauditd_printk_skb: 7 callbacks suppressed
	[Sep13 23:39] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [595970063d5b] <==
	{"level":"info","ts":"2024-09-13T23:26:31.356872Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-13T23:26:31.356702Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.2:2380"}
	{"level":"info","ts":"2024-09-13T23:26:32.012555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-13T23:26:32.012692Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-13T23:26:32.012753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2024-09-13T23:26:32.012778Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2024-09-13T23:26:32.012808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-13T23:26:32.012841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-09-13T23:26:32.012858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-09-13T23:26:32.020611Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:26:32.024699Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-979000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T23:26:32.024750Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T23:26:32.024756Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T23:26:32.025594Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T23:26:32.026142Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-09-13T23:26:32.024862Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T23:26:32.026236Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T23:26:32.026705Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T23:26:32.027323Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T23:26:32.045888Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:26:32.045928Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:26:32.045938Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:36:32.068333Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1824}
	{"level":"info","ts":"2024-09-13T23:36:32.153204Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1824,"took":"82.75266ms","hash":860820744,"current-db-size-bytes":8835072,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":4730880,"current-db-size-in-use":"4.7 MB"}
	{"level":"info","ts":"2024-09-13T23:36:32.153692Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":860820744,"revision":1824,"compact-revision":-1}
	
	
	==> gcp-auth [57bc2d30e227] <==
	2024/09/13 23:29:18 GCP Auth Webhook started!
	2024/09/13 23:29:33 Ready to marshal response ...
	2024/09/13 23:29:33 Ready to write response ...
	2024/09/13 23:29:34 Ready to marshal response ...
	2024/09/13 23:29:34 Ready to write response ...
	2024/09/13 23:29:56 Ready to marshal response ...
	2024/09/13 23:29:56 Ready to write response ...
	2024/09/13 23:29:56 Ready to marshal response ...
	2024/09/13 23:29:56 Ready to write response ...
	2024/09/13 23:29:56 Ready to marshal response ...
	2024/09/13 23:29:56 Ready to write response ...
	2024/09/13 23:37:58 Ready to marshal response ...
	2024/09/13 23:37:58 Ready to write response ...
	2024/09/13 23:38:07 Ready to marshal response ...
	2024/09/13 23:38:07 Ready to write response ...
	2024/09/13 23:38:16 Ready to marshal response ...
	2024/09/13 23:38:16 Ready to write response ...
	2024/09/13 23:38:47 Ready to marshal response ...
	2024/09/13 23:38:47 Ready to write response ...
	2024/09/13 23:38:57 Ready to marshal response ...
	2024/09/13 23:38:57 Ready to write response ...
	2024/09/13 23:39:05 Ready to marshal response ...
	2024/09/13 23:39:05 Ready to write response ...
	2024/09/13 23:39:05 Ready to marshal response ...
	2024/09/13 23:39:05 Ready to write response ...
	
	
	==> kernel <==
	 23:39:08 up 12 min,  0 users,  load average: 0.32, 0.46, 0.32
	Linux addons-979000 5.10.207 #1 SMP PREEMPT Fri Sep 13 18:07:06 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1ac6c29e1330] <==
	W0913 23:29:47.401396       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0913 23:29:47.630348       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0913 23:29:47.655207       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0913 23:29:47.659322       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0913 23:29:47.660891       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0913 23:29:47.817743       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0913 23:38:06.517748       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0913 23:38:31.064686       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:31.064702       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:38:31.075509       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:31.075531       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:38:31.087160       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:31.087178       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:38:31.173291       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:31.173305       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0913 23:38:31.192799       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0913 23:38:31.192812       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0913 23:38:32.173638       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0913 23:38:32.193376       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0913 23:38:32.201874       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0913 23:38:41.873460       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0913 23:38:42.888471       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0913 23:38:47.183382       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0913 23:38:47.284350       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.1.63"}
	I0913 23:38:57.522469       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.154.222"}
	
	
	==> kube-controller-manager [fa363401952b] <==
	W0913 23:38:51.337932       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:51.338066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 23:38:51.939553       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0913 23:38:51.950935       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:51.951278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 23:38:57.465026       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.96313ms"
	I0913 23:38:57.467993       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="2.19589ms"
	I0913 23:38:57.468099       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.803µs"
	I0913 23:38:57.469089       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.386µs"
	I0913 23:38:58.319929       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0913 23:38:58.321686       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="1.96µs"
	I0913 23:38:58.322492       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0913 23:39:00.040898       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:39:00.041043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 23:39:00.236173       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="15.828361ms"
	I0913 23:39:00.237092       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="16.558µs"
	W0913 23:39:01.761761       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:39:01.761788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:39:04.747817       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:39:04.747857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:39:07.003376       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:39:07.003426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:39:07.327546       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:39:07.327606       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 23:39:07.674661       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.462µs"
	
	
	==> kube-proxy [2fef24f741cf] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 23:26:42.624797       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 23:26:42.635975       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0913 23:26:42.636079       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 23:26:42.683492       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 23:26:42.683527       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 23:26:42.683542       1 server_linux.go:169] "Using iptables Proxier"
	I0913 23:26:42.686987       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 23:26:42.687190       1 server.go:483] "Version info" version="v1.31.1"
	I0913 23:26:42.687202       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 23:26:42.688089       1 config.go:199] "Starting service config controller"
	I0913 23:26:42.688102       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 23:26:42.688114       1 config.go:105] "Starting endpoint slice config controller"
	I0913 23:26:42.688116       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 23:26:42.688299       1 config.go:328] "Starting node config controller"
	I0913 23:26:42.688303       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 23:26:42.788584       1 shared_informer.go:320] Caches are synced for node config
	I0913 23:26:42.788603       1 shared_informer.go:320] Caches are synced for service config
	I0913 23:26:42.788614       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [00f9750561ff] <==
	W0913 23:26:32.585568       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 23:26:32.585573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:26:32.585598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0913 23:26:32.585606       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:26:32.585625       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 23:26:32.585632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:26:32.585652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 23:26:32.585656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:26:32.585677       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0913 23:26:32.585685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:26:32.585701       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 23:26:32.585705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:26:32.585731       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0913 23:26:32.585736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:26:32.585960       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 23:26:32.585977       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0913 23:26:33.431532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0913 23:26:33.432056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:26:33.522147       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0913 23:26:33.522270       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:26:33.531924       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0913 23:26:33.532001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 23:26:33.534194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0913 23:26:33.534401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0913 23:26:34.182037       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 23:39:02 addons-979000 kubelet[2036]: E0913 23:39:02.592497    2036 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0c5f8de4-91d0-4a6f-84f9-3f9d61ef8451"
	Sep 13 23:39:02 addons-979000 kubelet[2036]: I0913 23:39:02.601144    2036 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acf05dac-6794-4efe-8953-842dadc38e54" path="/var/lib/kubelet/pods/acf05dac-6794-4efe-8953-842dadc38e54/volumes"
	Sep 13 23:39:05 addons-979000 kubelet[2036]: I0913 23:39:05.691090    2036 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-2cpsg" podStartSLOduration=7.021266651 podStartE2EDuration="8.691079921s" podCreationTimestamp="2024-09-13 23:38:57 +0000 UTC" firstStartedPulling="2024-09-13 23:38:57.874658979 +0000 UTC m=+743.366720720" lastFinishedPulling="2024-09-13 23:38:59.544472291 +0000 UTC m=+745.036533990" observedRunningTime="2024-09-13 23:39:00.219165037 +0000 UTC m=+745.711226819" watchObservedRunningTime="2024-09-13 23:39:05.691079921 +0000 UTC m=+751.183141661"
	Sep 13 23:39:05 addons-979000 kubelet[2036]: E0913 23:39:05.691439    2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81da4cd2-e996-45f3-85f5-d84b3f3bf929" containerName="gadget"
	Sep 13 23:39:05 addons-979000 kubelet[2036]: E0913 23:39:05.691471    2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="81da4cd2-e996-45f3-85f5-d84b3f3bf929" containerName="gadget"
	Sep 13 23:39:05 addons-979000 kubelet[2036]: E0913 23:39:05.691475    2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f0da886-403d-47e0-8118-8b07cbd74156" containerName="minikube-ingress-dns"
	Sep 13 23:39:05 addons-979000 kubelet[2036]: E0913 23:39:05.691479    2036 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="acf05dac-6794-4efe-8953-842dadc38e54" containerName="controller"
	Sep 13 23:39:05 addons-979000 kubelet[2036]: I0913 23:39:05.691498    2036 memory_manager.go:354] "RemoveStaleState removing state" podUID="acf05dac-6794-4efe-8953-842dadc38e54" containerName="controller"
	Sep 13 23:39:05 addons-979000 kubelet[2036]: I0913 23:39:05.691501    2036 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f0da886-403d-47e0-8118-8b07cbd74156" containerName="minikube-ingress-dns"
	Sep 13 23:39:05 addons-979000 kubelet[2036]: I0913 23:39:05.813110    2036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/2f8e2865-83f4-4465-af6b-d120426f2a4a-data\") pod \"helper-pod-create-pvc-48e892b2-e457-47ef-ab8f-c1a5597bc326\" (UID: \"2f8e2865-83f4-4465-af6b-d120426f2a4a\") " pod="local-path-storage/helper-pod-create-pvc-48e892b2-e457-47ef-ab8f-c1a5597bc326"
	Sep 13 23:39:05 addons-979000 kubelet[2036]: I0913 23:39:05.813151    2036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b5tw\" (UniqueName: \"kubernetes.io/projected/2f8e2865-83f4-4465-af6b-d120426f2a4a-kube-api-access-4b5tw\") pod \"helper-pod-create-pvc-48e892b2-e457-47ef-ab8f-c1a5597bc326\" (UID: \"2f8e2865-83f4-4465-af6b-d120426f2a4a\") " pod="local-path-storage/helper-pod-create-pvc-48e892b2-e457-47ef-ab8f-c1a5597bc326"
	Sep 13 23:39:05 addons-979000 kubelet[2036]: I0913 23:39:05.813169    2036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2f8e2865-83f4-4465-af6b-d120426f2a4a-gcp-creds\") pod \"helper-pod-create-pvc-48e892b2-e457-47ef-ab8f-c1a5597bc326\" (UID: \"2f8e2865-83f4-4465-af6b-d120426f2a4a\") " pod="local-path-storage/helper-pod-create-pvc-48e892b2-e457-47ef-ab8f-c1a5597bc326"
	Sep 13 23:39:05 addons-979000 kubelet[2036]: I0913 23:39:05.813181    2036 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/2f8e2865-83f4-4465-af6b-d120426f2a4a-script\") pod \"helper-pod-create-pvc-48e892b2-e457-47ef-ab8f-c1a5597bc326\" (UID: \"2f8e2865-83f4-4465-af6b-d120426f2a4a\") " pod="local-path-storage/helper-pod-create-pvc-48e892b2-e457-47ef-ab8f-c1a5597bc326"
	Sep 13 23:39:07 addons-979000 kubelet[2036]: I0913 23:39:07.725280    2036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9c8405dd-91c3-49f6-b41b-031d4d316b35-gcp-creds\") pod \"9c8405dd-91c3-49f6-b41b-031d4d316b35\" (UID: \"9c8405dd-91c3-49f6-b41b-031d4d316b35\") "
	Sep 13 23:39:07 addons-979000 kubelet[2036]: I0913 23:39:07.725305    2036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t57dn\" (UniqueName: \"kubernetes.io/projected/9c8405dd-91c3-49f6-b41b-031d4d316b35-kube-api-access-t57dn\") pod \"9c8405dd-91c3-49f6-b41b-031d4d316b35\" (UID: \"9c8405dd-91c3-49f6-b41b-031d4d316b35\") "
	Sep 13 23:39:07 addons-979000 kubelet[2036]: I0913 23:39:07.725350    2036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c8405dd-91c3-49f6-b41b-031d4d316b35-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "9c8405dd-91c3-49f6-b41b-031d4d316b35" (UID: "9c8405dd-91c3-49f6-b41b-031d4d316b35"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 13 23:39:07 addons-979000 kubelet[2036]: I0913 23:39:07.725972    2036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c8405dd-91c3-49f6-b41b-031d4d316b35-kube-api-access-t57dn" (OuterVolumeSpecName: "kube-api-access-t57dn") pod "9c8405dd-91c3-49f6-b41b-031d4d316b35" (UID: "9c8405dd-91c3-49f6-b41b-031d4d316b35"). InnerVolumeSpecName "kube-api-access-t57dn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:39:07 addons-979000 kubelet[2036]: I0913 23:39:07.825858    2036 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-t57dn\" (UniqueName: \"kubernetes.io/projected/9c8405dd-91c3-49f6-b41b-031d4d316b35-kube-api-access-t57dn\") on node \"addons-979000\" DevicePath \"\""
	Sep 13 23:39:07 addons-979000 kubelet[2036]: I0913 23:39:07.825872    2036 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9c8405dd-91c3-49f6-b41b-031d4d316b35-gcp-creds\") on node \"addons-979000\" DevicePath \"\""
	Sep 13 23:39:07 addons-979000 kubelet[2036]: I0913 23:39:07.926891    2036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxsrr\" (UniqueName: \"kubernetes.io/projected/f3db3871-9bfc-4a43-96ef-55856578d904-kube-api-access-rxsrr\") pod \"f3db3871-9bfc-4a43-96ef-55856578d904\" (UID: \"f3db3871-9bfc-4a43-96ef-55856578d904\") "
	Sep 13 23:39:07 addons-979000 kubelet[2036]: I0913 23:39:07.926915    2036 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4spnl\" (UniqueName: \"kubernetes.io/projected/cd8d98f1-6d88-45e6-a0e4-d8808da7a54f-kube-api-access-4spnl\") pod \"cd8d98f1-6d88-45e6-a0e4-d8808da7a54f\" (UID: \"cd8d98f1-6d88-45e6-a0e4-d8808da7a54f\") "
	Sep 13 23:39:07 addons-979000 kubelet[2036]: I0913 23:39:07.928675    2036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3db3871-9bfc-4a43-96ef-55856578d904-kube-api-access-rxsrr" (OuterVolumeSpecName: "kube-api-access-rxsrr") pod "f3db3871-9bfc-4a43-96ef-55856578d904" (UID: "f3db3871-9bfc-4a43-96ef-55856578d904"). InnerVolumeSpecName "kube-api-access-rxsrr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:39:07 addons-979000 kubelet[2036]: I0913 23:39:07.928973    2036 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd8d98f1-6d88-45e6-a0e4-d8808da7a54f-kube-api-access-4spnl" (OuterVolumeSpecName: "kube-api-access-4spnl") pod "cd8d98f1-6d88-45e6-a0e4-d8808da7a54f" (UID: "cd8d98f1-6d88-45e6-a0e4-d8808da7a54f"). InnerVolumeSpecName "kube-api-access-4spnl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:39:08 addons-979000 kubelet[2036]: I0913 23:39:08.027989    2036 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rxsrr\" (UniqueName: \"kubernetes.io/projected/f3db3871-9bfc-4a43-96ef-55856578d904-kube-api-access-rxsrr\") on node \"addons-979000\" DevicePath \"\""
	Sep 13 23:39:08 addons-979000 kubelet[2036]: I0913 23:39:08.028014    2036 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4spnl\" (UniqueName: \"kubernetes.io/projected/cd8d98f1-6d88-45e6-a0e4-d8808da7a54f-kube-api-access-4spnl\") on node \"addons-979000\" DevicePath \"\""
	
	
	==> storage-provisioner [d50e0eca93cb] <==
	I0913 23:26:44.989050       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 23:26:45.006323       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 23:26:45.006353       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 23:26:45.054105       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 23:26:45.055238       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-979000_6fde0fd9-fe22-4132-82e7-cdc7adaf2b27!
	I0913 23:26:45.055590       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"571d7472-36c6-4e3e-b6f8-a177c2406aee", APIVersion:"v1", ResourceVersion:"838", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-979000_6fde0fd9-fe22-4132-82e7-cdc7adaf2b27 became leader
	I0913 23:26:45.157519       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-979000_6fde0fd9-fe22-4132-82e7-cdc7adaf2b27!
	

                                                
                                                
-- /stdout --
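Note on the coredns block above: the NXDOMAIN/NOERROR pairs are ordinary cluster search-path expansion, not a registry failure in themselves. Pod resolvers retry each lookup with the cluster search suffixes appended, so registry.kube-system.svc.cluster.local is also queried as ...kube-system.svc.cluster.local, ...svc.cluster.local, and ...cluster.local (all NXDOMAIN), while the exact name answers NOERROR. To inspect the search list that drives this from inside the cluster (a sketch; POD is a placeholder for any running pod name, not one from this report):

	kubectl --context addons-979000 exec POD -- cat /etc/resolv.conf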
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-979000 -n addons-979000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-979000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox test-local-path helper-pod-create-pvc-48e892b2-e457-47ef-ab8f-c1a5597bc326
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-979000 describe pod busybox test-local-path helper-pod-create-pvc-48e892b2-e457-47ef-ab8f-c1a5597bc326
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-979000 describe pod busybox test-local-path helper-pod-create-pvc-48e892b2-e457-47ef-ab8f-c1a5597bc326: exit status 1 (47.494667ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-979000/192.168.105.2
	Start Time:       Fri, 13 Sep 2024 16:29:56 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2wm4f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2wm4f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m12s                  default-scheduler  Successfully assigned default/busybox to addons-979000
	  Normal   Pulling    7m36s (x4 over 9m12s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m36s (x4 over 9m12s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m36s (x4 over 9m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m21s (x6 over 9m11s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m6s (x20 over 9m11s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dpc67 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-dpc67:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "helper-pod-create-pvc-48e892b2-e457-47ef-ab8f-c1a5597bc326" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-979000 describe pod busybox test-local-path helper-pod-create-pvc-48e892b2-e457-47ef-ab8f-c1a5597bc326: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.28s)
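Two distinct problems show up in this post-mortem: the busybox pod is stuck in ImagePullBackOff because the kubelet's pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc was rejected with "unauthorized: authentication failed", and the post-mortem `kubectl describe` exits 1 only because the helper pod had already been cleaned up before it ran. A quick sketch for separating the registry problem from the test itself (assumes a Docker daemon on the host; not part of the test suite):

    # Repeat the pull the kubelet attempted; an "unauthorized" error here
    # points at a registry/credential problem on the runner, not a test bug.
    docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc

    # `kubectl describe pod` returns a non-zero exit code when any named pod
    # is missing, which accounts for the exit status 1 above.
    kubectl --context addons-979000 describe pod busybox; echo "exit: $?"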

TestCertOptions (10.28s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-905000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-905000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (10.015530667s)

-- stdout --
	* [cert-options-905000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-905000" primary control-plane node in "cert-options-905000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-905000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-905000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-905000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-905000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-905000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (79.7245ms)

-- stdout --
	* The control-plane node cert-options-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-905000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-905000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-905000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-905000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-905000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.766916ms)

-- stdout --
	* The control-plane node cert-options-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-905000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-905000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-905000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-905000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-13 17:11:07.867732 -0700 PDT m=+2735.371085168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-905000 -n cert-options-905000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-905000 -n cert-options-905000: exit status 7 (30.894042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-905000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-905000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-905000
--- FAIL: TestCertOptions (10.28s)
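The root cause here (and in the other qemu2 failures below) is not certificate handling at all: every VM create fails with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, meaning the socket_vmnet daemon was not serving its socket on the runner, so the host never boots and the profile is left Stopped. A minimal sketch for checking the daemon on the runner (the daemon binary path is inferred from the SocketVMnetClientPath seen in the logs, and the gateway address is an assumption based on the 192.168.105.x addresses earlier in this report):

    # Does the socket exist, and is anything serving it?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # If nothing is running, start the daemon manually as a test
    # (vmnet.framework requires root).
    sudo /opt/socket_vmnet/bin/socket_vmnet \
      --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet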

TestCertExpiration (195.39s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-955000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-955000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.078487375s)

-- stdout --
	* [cert-expiration-955000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-955000" primary control-plane node in "cert-expiration-955000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-955000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-955000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-955000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-955000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-955000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.183592875s)

-- stdout --
	* [cert-expiration-955000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-955000" primary control-plane node in "cert-expiration-955000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-955000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-955000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-955000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-955000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-955000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-955000" primary control-plane node in "cert-expiration-955000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-955000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-955000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-955000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-09-13 17:14:07.800342 -0700 PDT m=+2915.306390918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-955000 -n cert-expiration-955000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-955000 -n cert-expiration-955000: exit status 7 (46.805375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-955000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-955000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-955000
--- FAIL: TestCertExpiration (195.39s)
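For context, the intended flow of this test is visible in the two start commands above: provision a cluster whose certificates expire after 3 minutes, let them lapse (which is why the test runs for ~195s even though each start fails within seconds), then restart with a one-year TTL and assert that minikube warns about the expired certificates. With a working driver the same flow can be reproduced by hand:

    # Step 1: certs that expire in 3 minutes.
    minikube start -p cert-expiration-955000 --memory=2048 --cert-expiration=3m --driver=qemu2
    sleep 180   # wait for the certificates to expire

    # Step 2: restart with a sane TTL; the output should warn about the
    # expired certificates being regenerated.
    minikube start -p cert-expiration-955000 --memory=2048 --cert-expiration=8760h --driver=qemu2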

TestDockerFlags (10.28s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-124000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-124000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.048842459s)

-- stdout --
	* [docker-flags-124000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-124000" primary control-plane node in "docker-flags-124000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-124000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:10:47.441040    5024 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:10:47.441162    5024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:10:47.441165    5024 out.go:358] Setting ErrFile to fd 2...
	I0913 17:10:47.441167    5024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:10:47.441274    5024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:10:47.442327    5024 out.go:352] Setting JSON to false
	I0913 17:10:47.458872    5024 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4211,"bootTime":1726268436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:10:47.458948    5024 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:10:47.463163    5024 out.go:177] * [docker-flags-124000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:10:47.473204    5024 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:10:47.473260    5024 notify.go:220] Checking for updates...
	I0913 17:10:47.479133    5024 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:10:47.482090    5024 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:10:47.485138    5024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:10:47.488173    5024 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:10:47.491092    5024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:10:47.494541    5024 config.go:182] Loaded profile config "force-systemd-flag-300000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:10:47.494604    5024 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:10:47.494661    5024 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:10:47.499137    5024 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:10:47.506128    5024 start.go:297] selected driver: qemu2
	I0913 17:10:47.506134    5024 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:10:47.506140    5024 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:10:47.508574    5024 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:10:47.512194    5024 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:10:47.516148    5024 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0913 17:10:47.516164    5024 cni.go:84] Creating CNI manager for ""
	I0913 17:10:47.516192    5024 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:10:47.516198    5024 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 17:10:47.516232    5024 start.go:340] cluster config:
	{Name:docker-flags-124000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-124000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:10:47.520196    5024 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:10:47.528002    5024 out.go:177] * Starting "docker-flags-124000" primary control-plane node in "docker-flags-124000" cluster
	I0913 17:10:47.532125    5024 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:10:47.532141    5024 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:10:47.532154    5024 cache.go:56] Caching tarball of preloaded images
	I0913 17:10:47.532227    5024 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:10:47.532233    5024 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:10:47.532303    5024 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/docker-flags-124000/config.json ...
	I0913 17:10:47.532315    5024 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/docker-flags-124000/config.json: {Name:mk6fe5265674d79aedaf80c63da781a673cc386c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:10:47.532529    5024 start.go:360] acquireMachinesLock for docker-flags-124000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:10:47.532563    5024 start.go:364] duration metric: took 27.541µs to acquireMachinesLock for "docker-flags-124000"
	I0913 17:10:47.532574    5024 start.go:93] Provisioning new machine with config: &{Name:docker-flags-124000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-124000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:10:47.532614    5024 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:10:47.541081    5024 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0913 17:10:47.558221    5024 start.go:159] libmachine.API.Create for "docker-flags-124000" (driver="qemu2")
	I0913 17:10:47.558256    5024 client.go:168] LocalClient.Create starting
	I0913 17:10:47.558322    5024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:10:47.558350    5024 main.go:141] libmachine: Decoding PEM data...
	I0913 17:10:47.558359    5024 main.go:141] libmachine: Parsing certificate...
	I0913 17:10:47.558394    5024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:10:47.558420    5024 main.go:141] libmachine: Decoding PEM data...
	I0913 17:10:47.558427    5024 main.go:141] libmachine: Parsing certificate...
	I0913 17:10:47.558778    5024 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:10:47.719669    5024 main.go:141] libmachine: Creating SSH key...
	I0913 17:10:47.903840    5024 main.go:141] libmachine: Creating Disk image...
	I0913 17:10:47.903847    5024 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:10:47.904054    5024 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/disk.qcow2
	I0913 17:10:47.913736    5024 main.go:141] libmachine: STDOUT: 
	I0913 17:10:47.913758    5024 main.go:141] libmachine: STDERR: 
	I0913 17:10:47.913815    5024 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/disk.qcow2 +20000M
	I0913 17:10:47.921769    5024 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:10:47.921783    5024 main.go:141] libmachine: STDERR: 
	I0913 17:10:47.921811    5024 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/disk.qcow2
	I0913 17:10:47.921816    5024 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:10:47.921829    5024 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:10:47.921861    5024 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:d1:36:58:da:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/disk.qcow2
	I0913 17:10:47.923507    5024 main.go:141] libmachine: STDOUT: 
	I0913 17:10:47.923519    5024 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:10:47.923539    5024 client.go:171] duration metric: took 365.281834ms to LocalClient.Create
	I0913 17:10:49.925695    5024 start.go:128] duration metric: took 2.393094334s to createHost
	I0913 17:10:49.925801    5024 start.go:83] releasing machines lock for "docker-flags-124000", held for 2.393236375s
	W0913 17:10:49.925849    5024 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:10:49.947745    5024 out.go:177] * Deleting "docker-flags-124000" in qemu2 ...
	W0913 17:10:49.971631    5024 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:10:49.971651    5024 start.go:729] Will try again in 5 seconds ...
	I0913 17:10:54.973759    5024 start.go:360] acquireMachinesLock for docker-flags-124000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:10:54.974098    5024 start.go:364] duration metric: took 210.416µs to acquireMachinesLock for "docker-flags-124000"
	I0913 17:10:54.974174    5024 start.go:93] Provisioning new machine with config: &{Name:docker-flags-124000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-124000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:10:54.974432    5024 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:10:54.984289    5024 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0913 17:10:55.021560    5024 start.go:159] libmachine.API.Create for "docker-flags-124000" (driver="qemu2")
	I0913 17:10:55.021603    5024 client.go:168] LocalClient.Create starting
	I0913 17:10:55.021693    5024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:10:55.021726    5024 main.go:141] libmachine: Decoding PEM data...
	I0913 17:10:55.021735    5024 main.go:141] libmachine: Parsing certificate...
	I0913 17:10:55.021778    5024 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:10:55.021804    5024 main.go:141] libmachine: Decoding PEM data...
	I0913 17:10:55.021810    5024 main.go:141] libmachine: Parsing certificate...
	I0913 17:10:55.022091    5024 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:10:55.181616    5024 main.go:141] libmachine: Creating SSH key...
	I0913 17:10:55.387989    5024 main.go:141] libmachine: Creating Disk image...
	I0913 17:10:55.388000    5024 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:10:55.388183    5024 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/disk.qcow2
	I0913 17:10:55.397926    5024 main.go:141] libmachine: STDOUT: 
	I0913 17:10:55.397955    5024 main.go:141] libmachine: STDERR: 
	I0913 17:10:55.398024    5024 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/disk.qcow2 +20000M
	I0913 17:10:55.406227    5024 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:10:55.406241    5024 main.go:141] libmachine: STDERR: 
	I0913 17:10:55.406289    5024 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/disk.qcow2
	I0913 17:10:55.406294    5024 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:10:55.406302    5024 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:10:55.406342    5024 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:31:80:62:6a:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/docker-flags-124000/disk.qcow2
	I0913 17:10:55.407950    5024 main.go:141] libmachine: STDOUT: 
	I0913 17:10:55.407962    5024 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:10:55.407976    5024 client.go:171] duration metric: took 386.374541ms to LocalClient.Create
	I0913 17:10:57.410172    5024 start.go:128] duration metric: took 2.435736583s to createHost
	I0913 17:10:57.410273    5024 start.go:83] releasing machines lock for "docker-flags-124000", held for 2.436189375s
	W0913 17:10:57.410724    5024 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-124000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-124000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:10:57.428445    5024 out.go:201] 
	W0913 17:10:57.432482    5024 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:10:57.432509    5024 out.go:270] * 
	* 
	W0913 17:10:57.435014    5024 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:10:57.448431    5024 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-124000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-124000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-124000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (77.734292ms)

-- stdout --
	* The control-plane node docker-flags-124000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-124000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-124000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-124000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-124000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-124000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-124000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-124000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-124000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (44.867959ms)

-- stdout --
	* The control-plane node docker-flags-124000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-124000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-124000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-124000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-124000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-124000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-09-13 17:10:57.587006 -0700 PDT m=+2725.090205001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-124000 -n docker-flags-124000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-124000 -n docker-flags-124000: exit status 7 (29.827334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-124000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-124000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-124000
--- FAIL: TestDockerFlags (10.28s)
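The assertions here only surface the "host is not running" message because the VM never started; the checks themselves simply read the Docker unit back out of systemd. On a booted cluster the two probes would be expected to show the configured values, roughly as follows (a sketch; the exact formatting is systemd's, and the dockerd path inside the guest is an assumption):

    $ out/minikube-darwin-arm64 -p docker-flags-124000 ssh "sudo systemctl show docker --property=Environment --no-pager"
    Environment=FOO=BAR BAZ=BAT

    $ out/minikube-darwin-arm64 -p docker-flags-124000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
    ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... }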

TestForceSystemdFlag (10.11s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-300000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-300000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.917681s)

-- stdout --
	* [force-systemd-flag-300000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-300000" primary control-plane node in "force-systemd-flag-300000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-300000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:10:42.452337    5000 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:10:42.452460    5000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:10:42.452465    5000 out.go:358] Setting ErrFile to fd 2...
	I0913 17:10:42.452468    5000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:10:42.452600    5000 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:10:42.453867    5000 out.go:352] Setting JSON to false
	I0913 17:10:42.470106    5000 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4206,"bootTime":1726268436,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:10:42.470170    5000 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:10:42.476814    5000 out.go:177] * [force-systemd-flag-300000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:10:42.483794    5000 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:10:42.483826    5000 notify.go:220] Checking for updates...
	I0913 17:10:42.492812    5000 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:10:42.495771    5000 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:10:42.498762    5000 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:10:42.501806    5000 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:10:42.503171    5000 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:10:42.506118    5000 config.go:182] Loaded profile config "force-systemd-env-453000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:10:42.506191    5000 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:10:42.506254    5000 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:10:42.509747    5000 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:10:42.514746    5000 start.go:297] selected driver: qemu2
	I0913 17:10:42.514752    5000 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:10:42.514759    5000 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:10:42.517114    5000 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:10:42.519798    5000 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:10:42.522894    5000 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 17:10:42.522908    5000 cni.go:84] Creating CNI manager for ""
	I0913 17:10:42.522935    5000 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:10:42.522941    5000 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 17:10:42.522971    5000 start.go:340] cluster config:
	{Name:force-systemd-flag-300000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-300000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:10:42.526818    5000 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:10:42.534832    5000 out.go:177] * Starting "force-systemd-flag-300000" primary control-plane node in "force-systemd-flag-300000" cluster
	I0913 17:10:42.538759    5000 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:10:42.538774    5000 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:10:42.538783    5000 cache.go:56] Caching tarball of preloaded images
	I0913 17:10:42.538842    5000 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:10:42.538848    5000 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:10:42.538913    5000 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/force-systemd-flag-300000/config.json ...
	I0913 17:10:42.538925    5000 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/force-systemd-flag-300000/config.json: {Name:mkce6ee3fb342000ed212a2153b81d8f7df9b9d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:10:42.539318    5000 start.go:360] acquireMachinesLock for force-systemd-flag-300000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:10:42.539355    5000 start.go:364] duration metric: took 29.417µs to acquireMachinesLock for "force-systemd-flag-300000"
	I0913 17:10:42.539366    5000 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-300000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-300000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:10:42.539391    5000 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:10:42.545666    5000 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0913 17:10:42.564169    5000 start.go:159] libmachine.API.Create for "force-systemd-flag-300000" (driver="qemu2")
	I0913 17:10:42.564201    5000 client.go:168] LocalClient.Create starting
	I0913 17:10:42.564275    5000 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:10:42.564312    5000 main.go:141] libmachine: Decoding PEM data...
	I0913 17:10:42.564320    5000 main.go:141] libmachine: Parsing certificate...
	I0913 17:10:42.564357    5000 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:10:42.564386    5000 main.go:141] libmachine: Decoding PEM data...
	I0913 17:10:42.564394    5000 main.go:141] libmachine: Parsing certificate...
	I0913 17:10:42.564802    5000 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:10:42.723001    5000 main.go:141] libmachine: Creating SSH key...
	I0913 17:10:42.785641    5000 main.go:141] libmachine: Creating Disk image...
	I0913 17:10:42.785646    5000 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:10:42.785812    5000 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/disk.qcow2
	I0913 17:10:42.794867    5000 main.go:141] libmachine: STDOUT: 
	I0913 17:10:42.794892    5000 main.go:141] libmachine: STDERR: 
	I0913 17:10:42.794958    5000 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/disk.qcow2 +20000M
	I0913 17:10:42.802939    5000 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:10:42.802961    5000 main.go:141] libmachine: STDERR: 
	I0913 17:10:42.802983    5000 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/disk.qcow2
	I0913 17:10:42.802989    5000 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:10:42.802999    5000 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:10:42.803030    5000 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:c9:ce:6f:6d:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/disk.qcow2
	I0913 17:10:42.804597    5000 main.go:141] libmachine: STDOUT: 
	I0913 17:10:42.804610    5000 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:10:42.804631    5000 client.go:171] duration metric: took 240.426583ms to LocalClient.Create
	I0913 17:10:44.806793    5000 start.go:128] duration metric: took 2.267414334s to createHost
	I0913 17:10:44.806859    5000 start.go:83] releasing machines lock for "force-systemd-flag-300000", held for 2.267527875s
	W0913 17:10:44.806937    5000 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:10:44.831089    5000 out.go:177] * Deleting "force-systemd-flag-300000" in qemu2 ...
	W0913 17:10:44.858200    5000 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:10:44.858222    5000 start.go:729] Will try again in 5 seconds ...
	I0913 17:10:49.860469    5000 start.go:360] acquireMachinesLock for force-systemd-flag-300000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:10:49.925925    5000 start.go:364] duration metric: took 65.315ms to acquireMachinesLock for "force-systemd-flag-300000"
	I0913 17:10:49.926066    5000 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-300000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-300000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:10:49.926268    5000 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:10:49.935693    5000 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0913 17:10:49.984040    5000 start.go:159] libmachine.API.Create for "force-systemd-flag-300000" (driver="qemu2")
	I0913 17:10:49.984083    5000 client.go:168] LocalClient.Create starting
	I0913 17:10:49.984202    5000 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:10:49.984270    5000 main.go:141] libmachine: Decoding PEM data...
	I0913 17:10:49.984288    5000 main.go:141] libmachine: Parsing certificate...
	I0913 17:10:49.984374    5000 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:10:49.984418    5000 main.go:141] libmachine: Decoding PEM data...
	I0913 17:10:49.984434    5000 main.go:141] libmachine: Parsing certificate...
	I0913 17:10:49.985179    5000 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:10:50.163431    5000 main.go:141] libmachine: Creating SSH key...
	I0913 17:10:50.264686    5000 main.go:141] libmachine: Creating Disk image...
	I0913 17:10:50.264691    5000 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:10:50.264869    5000 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/disk.qcow2
	I0913 17:10:50.274126    5000 main.go:141] libmachine: STDOUT: 
	I0913 17:10:50.274141    5000 main.go:141] libmachine: STDERR: 
	I0913 17:10:50.274206    5000 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/disk.qcow2 +20000M
	I0913 17:10:50.282028    5000 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:10:50.282043    5000 main.go:141] libmachine: STDERR: 
	I0913 17:10:50.282055    5000 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/disk.qcow2
	I0913 17:10:50.282062    5000 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:10:50.282071    5000 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:10:50.282108    5000 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:b5:c0:aa:c2:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-flag-300000/disk.qcow2
	I0913 17:10:50.283734    5000 main.go:141] libmachine: STDOUT: 
	I0913 17:10:50.283747    5000 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:10:50.283759    5000 client.go:171] duration metric: took 299.673125ms to LocalClient.Create
	I0913 17:10:52.285994    5000 start.go:128] duration metric: took 2.359733208s to createHost
	I0913 17:10:52.286048    5000 start.go:83] releasing machines lock for "force-systemd-flag-300000", held for 2.360097292s
	W0913 17:10:52.286374    5000 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-300000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-300000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:10:52.300934    5000 out.go:201] 
	W0913 17:10:52.313326    5000 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:10:52.313360    5000 out.go:270] * 
	* 
	W0913 17:10:52.315360    5000 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:10:52.329033    5000 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-300000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-300000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-300000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.849375ms)

-- stdout --
	* The control-plane node force-systemd-flag-300000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-300000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-300000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-09-13 17:10:52.425084 -0700 PDT m=+2719.928206168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-300000 -n force-systemd-flag-300000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-300000 -n force-systemd-flag-300000: exit status 7 (34.4335ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-300000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-300000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-300000
--- FAIL: TestForceSystemdFlag (10.11s)

TestForceSystemdEnv (11.61s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-453000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-453000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.412676125s)

-- stdout --
	* [force-systemd-env-453000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-453000" primary control-plane node in "force-systemd-env-453000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-453000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:10:35.835785    4968 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:10:35.835921    4968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:10:35.835927    4968 out.go:358] Setting ErrFile to fd 2...
	I0913 17:10:35.835929    4968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:10:35.836063    4968 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:10:35.837140    4968 out.go:352] Setting JSON to false
	I0913 17:10:35.853255    4968 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4199,"bootTime":1726268436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:10:35.853336    4968 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:10:35.860791    4968 out.go:177] * [force-systemd-env-453000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:10:35.868736    4968 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:10:35.868774    4968 notify.go:220] Checking for updates...
	I0913 17:10:35.876700    4968 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:10:35.879665    4968 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:10:35.882717    4968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:10:35.885738    4968 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:10:35.888672    4968 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0913 17:10:35.891958    4968 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:10:35.892016    4968 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:10:35.896734    4968 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:10:35.903685    4968 start.go:297] selected driver: qemu2
	I0913 17:10:35.903691    4968 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:10:35.903705    4968 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:10:35.906059    4968 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:10:35.909717    4968 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:10:35.912782    4968 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 17:10:35.912800    4968 cni.go:84] Creating CNI manager for ""
	I0913 17:10:35.912821    4968 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:10:35.912825    4968 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 17:10:35.912852    4968 start.go:340] cluster config:
	{Name:force-systemd-env-453000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-453000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:10:35.916656    4968 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:10:35.924740    4968 out.go:177] * Starting "force-systemd-env-453000" primary control-plane node in "force-systemd-env-453000" cluster
	I0913 17:10:35.928673    4968 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:10:35.928693    4968 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:10:35.928710    4968 cache.go:56] Caching tarball of preloaded images
	I0913 17:10:35.928799    4968 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:10:35.928805    4968 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:10:35.928879    4968 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/force-systemd-env-453000/config.json ...
	I0913 17:10:35.928891    4968 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/force-systemd-env-453000/config.json: {Name:mka7d01daef9b9a5164675b72d43a696b8343eaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:10:35.929126    4968 start.go:360] acquireMachinesLock for force-systemd-env-453000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:10:35.929166    4968 start.go:364] duration metric: took 30.375µs to acquireMachinesLock for "force-systemd-env-453000"
	I0913 17:10:35.929178    4968 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-453000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-453000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:10:35.929209    4968 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:10:35.936697    4968 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0913 17:10:35.954933    4968 start.go:159] libmachine.API.Create for "force-systemd-env-453000" (driver="qemu2")
	I0913 17:10:35.954962    4968 client.go:168] LocalClient.Create starting
	I0913 17:10:35.955027    4968 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:10:35.955059    4968 main.go:141] libmachine: Decoding PEM data...
	I0913 17:10:35.955070    4968 main.go:141] libmachine: Parsing certificate...
	I0913 17:10:35.955108    4968 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:10:35.955137    4968 main.go:141] libmachine: Decoding PEM data...
	I0913 17:10:35.955147    4968 main.go:141] libmachine: Parsing certificate...
	I0913 17:10:35.955471    4968 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:10:36.117097    4968 main.go:141] libmachine: Creating SSH key...
	I0913 17:10:36.156613    4968 main.go:141] libmachine: Creating Disk image...
	I0913 17:10:36.156618    4968 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:10:36.156792    4968 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/disk.qcow2
	I0913 17:10:36.165855    4968 main.go:141] libmachine: STDOUT: 
	I0913 17:10:36.165872    4968 main.go:141] libmachine: STDERR: 
	I0913 17:10:36.165927    4968 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/disk.qcow2 +20000M
	I0913 17:10:36.173786    4968 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:10:36.173801    4968 main.go:141] libmachine: STDERR: 
	I0913 17:10:36.173818    4968 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/disk.qcow2
	I0913 17:10:36.173823    4968 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:10:36.173837    4968 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:10:36.173870    4968 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:6c:58:5f:bd:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/disk.qcow2
	I0913 17:10:36.175537    4968 main.go:141] libmachine: STDOUT: 
	I0913 17:10:36.175551    4968 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:10:36.175570    4968 client.go:171] duration metric: took 220.602834ms to LocalClient.Create
	I0913 17:10:38.177619    4968 start.go:128] duration metric: took 2.248436416s to createHost
	I0913 17:10:38.177635    4968 start.go:83] releasing machines lock for "force-systemd-env-453000", held for 2.2484975s
	W0913 17:10:38.177646    4968 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:10:38.187106    4968 out.go:177] * Deleting "force-systemd-env-453000" in qemu2 ...
	W0913 17:10:38.198192    4968 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:10:38.198206    4968 start.go:729] Will try again in 5 seconds ...
	I0913 17:10:43.200359    4968 start.go:360] acquireMachinesLock for force-systemd-env-453000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:10:44.807004    4968 start.go:364] duration metric: took 1.606538125s to acquireMachinesLock for "force-systemd-env-453000"
	I0913 17:10:44.807174    4968 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-453000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-453000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:10:44.807406    4968 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:10:44.823094    4968 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0913 17:10:44.876109    4968 start.go:159] libmachine.API.Create for "force-systemd-env-453000" (driver="qemu2")
	I0913 17:10:44.876159    4968 client.go:168] LocalClient.Create starting
	I0913 17:10:44.876298    4968 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:10:44.876363    4968 main.go:141] libmachine: Decoding PEM data...
	I0913 17:10:44.876380    4968 main.go:141] libmachine: Parsing certificate...
	I0913 17:10:44.876460    4968 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:10:44.876503    4968 main.go:141] libmachine: Decoding PEM data...
	I0913 17:10:44.876518    4968 main.go:141] libmachine: Parsing certificate...
	I0913 17:10:44.877021    4968 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:10:45.049995    4968 main.go:141] libmachine: Creating SSH key...
	I0913 17:10:45.141665    4968 main.go:141] libmachine: Creating Disk image...
	I0913 17:10:45.141673    4968 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:10:45.141845    4968 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/disk.qcow2
	I0913 17:10:45.151202    4968 main.go:141] libmachine: STDOUT: 
	I0913 17:10:45.151215    4968 main.go:141] libmachine: STDERR: 
	I0913 17:10:45.151281    4968 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/disk.qcow2 +20000M
	I0913 17:10:45.159116    4968 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:10:45.159137    4968 main.go:141] libmachine: STDERR: 
	I0913 17:10:45.159150    4968 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/disk.qcow2
	I0913 17:10:45.159155    4968 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:10:45.159162    4968 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:10:45.159194    4968 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:4b:35:dd:e3:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/force-systemd-env-453000/disk.qcow2
	I0913 17:10:45.160869    4968 main.go:141] libmachine: STDOUT: 
	I0913 17:10:45.160884    4968 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:10:45.160906    4968 client.go:171] duration metric: took 284.745958ms to LocalClient.Create
	I0913 17:10:47.163167    4968 start.go:128] duration metric: took 2.355720792s to createHost
	I0913 17:10:47.163232    4968 start.go:83] releasing machines lock for "force-systemd-env-453000", held for 2.3562075s
	W0913 17:10:47.163574    4968 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-453000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-453000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:10:47.182397    4968 out.go:201] 
	W0913 17:10:47.191025    4968 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:10:47.191055    4968 out.go:270] * 
	* 
	W0913 17:10:47.193191    4968 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:10:47.204130    4968 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-453000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-453000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-453000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.044917ms)

-- stdout --
	* The control-plane node force-systemd-env-453000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-453000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-453000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-09-13 17:10:47.301331 -0700 PDT m=+2714.804376418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-453000 -n force-systemd-env-453000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-453000 -n force-systemd-env-453000: exit status 7 (34.375708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-453000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-453000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-453000
--- FAIL: TestForceSystemdEnv (11.61s)

TestFunctional/parallel/ServiceCmdConnect (35.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-830000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-830000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-dhk7r" [8dfba7b2-7516-4ce4-8095-2095003437c3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-dhk7r" [8dfba7b2-7516-4ce4-8095-2095003437c3] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.007398875s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:32025
functional_test.go:1661: error fetching http://192.168.105.4:32025: Get "http://192.168.105.4:32025": dial tcp 192.168.105.4:32025: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32025: Get "http://192.168.105.4:32025": dial tcp 192.168.105.4:32025: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32025: Get "http://192.168.105.4:32025": dial tcp 192.168.105.4:32025: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32025: Get "http://192.168.105.4:32025": dial tcp 192.168.105.4:32025: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32025: Get "http://192.168.105.4:32025": dial tcp 192.168.105.4:32025: connect: connection refused
E0913 16:44:18.661730    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:44:18.670635    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:44:18.684060    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:44:18.707501    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:44:18.749436    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:44:18.832827    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:32025: Get "http://192.168.105.4:32025": dial tcp 192.168.105.4:32025: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32025: Get "http://192.168.105.4:32025": dial tcp 192.168.105.4:32025: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:32025: Get "http://192.168.105.4:32025": dial tcp 192.168.105.4:32025: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-830000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-dhk7r
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-830000/192.168.105.4
Start Time:       Fri, 13 Sep 2024 16:43:57 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://26b5319979e737e32b72bd7df38bbc99b346cf4373aa600487783506280c2560
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 13 Sep 2024 16:44:13 -0700
      Finished:     Fri, 13 Sep 2024 16:44:13 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ww2cm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-ww2cm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  35s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-dhk7r to functional-830000
Normal   Pulled     19s (x3 over 34s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    19s (x3 over 34s)  kubelet            Created container echoserver-arm
Normal   Started    19s (x3 over 34s)  kubelet            Started container echoserver-arm
Warning  BackOff    6s (x4 over 32s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-dhk7r_default(8dfba7b2-7516-4ce4-8095-2095003437c3)
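
The describe output above shows echoserver-arm stuck in Waiting/CrashLoopBackOff with a last exit code of 1. A minimal client-go sketch that reads the same fields, assuming the default kubeconfig path and the pod name from this particular run (it is not part of the test suite):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load ~/.kube/config; a CI run would point this at its own profile.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Pod name taken from the describe output above; it changes per run.
        pod, err := cs.CoreV1().Pods("default").Get(context.Background(),
            "hello-node-connect-65d86f57f4-dhk7r", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, s := range pod.Status.ContainerStatuses {
            if s.State.Waiting != nil {
                fmt.Printf("%s waiting: %s\n", s.Name, s.State.Waiting.Reason)
            }
            if t := s.LastTerminationState.Terminated; t != nil {
                fmt.Printf("%s last exit code: %d\n", s.Name, t.ExitCode)
            }
        }
    }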

functional_test.go:1608: (dbg) Run:  kubectl --context functional-830000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
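
The logs pin down the crash: the container entrypoint execs /usr/sbin/nginx and the kernel rejects it with an exec format error, meaning the binary inside registry.k8s.io/echoserver-arm:1.8 is built for a different architecture than this arm64 node. A minimal sketch of how such a mismatch could be confirmed, assuming the binary has been copied out of the image to a local path (the path argument is hypothetical):

    package main

    import (
        "debug/elf"
        "fmt"
        "os"
    )

    func main() {
        if len(os.Args) != 2 {
            fmt.Fprintln(os.Stderr, "usage: elfarch <binary>")
            os.Exit(2)
        }
        // e.g. a copy of /usr/sbin/nginx extracted from the image.
        f, err := elf.Open(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()
        // Prints EM_AARCH64 for a binary runnable on this node;
        // EM_X86_64 would explain the exec format error above.
        fmt.Println("machine:", f.Machine)
    }
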
functional_test.go:1614: (dbg) Run:  kubectl --context functional-830000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.122.68
IPs:                      10.96.122.68
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32025/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
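
The svc describe shows why the fetches were refused: with the only backing pod in CrashLoopBackOff, the Endpoints list is empty, so nothing answers behind NodePort 32025 and each connection is rejected outright. A minimal probe sketching the fetch-and-retry behavior seen in the log (this is not the actual functional_test.go helper; the URL is the NodePort address from this run):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // NodePort URL taken from the service described above.
        const url = "http://192.168.105.4:32025"
        client := &http.Client{Timeout: 2 * time.Second}
        for attempt := 1; attempt <= 5; attempt++ {
            resp, err := client.Get(url)
            if err == nil {
                fmt.Println("status:", resp.Status)
                resp.Body.Close()
                return
            }
            // With no endpoints behind the service, every attempt fails
            // with "connection refused", as in the test log above.
            fmt.Printf("attempt %d: %v\n", attempt, err)
            time.Sleep(2 * time.Second)
        }
    }
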
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-830000 -n functional-830000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-830000 ssh findmnt                                                                                        | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-830000 ssh findmnt                                                                                        | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT | 13 Sep 24 16:44 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-830000 ssh -- ls                                                                                          | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT | 13 Sep 24 16:44 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-830000 ssh cat                                                                                            | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT | 13 Sep 24 16:44 PDT |
	|           | /mount-9p/test-1726271059614073000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-830000 ssh stat                                                                                           | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT | 13 Sep 24 16:44 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-830000 ssh stat                                                                                           | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT | 13 Sep 24 16:44 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-830000 ssh sudo                                                                                           | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT | 13 Sep 24 16:44 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-830000 ssh findmnt                                                                                        | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-830000                                                                                                 | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2185640952/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-830000 ssh findmnt                                                                                        | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-830000 ssh findmnt                                                                                        | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT | 13 Sep 24 16:44 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-830000 ssh -- ls                                                                                          | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT | 13 Sep 24 16:44 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-830000 ssh sudo                                                                                           | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-830000                                                                                                 | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2781637999/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-830000                                                                                                 | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2781637999/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-830000                                                                                                 | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT |                     |
	|           | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2781637999/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-830000 ssh findmnt                                                                                        | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-830000 ssh findmnt                                                                                        | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT | 13 Sep 24 16:44 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-830000 ssh findmnt                                                                                        | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT | 13 Sep 24 16:44 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-830000 ssh findmnt                                                                                        | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT | 13 Sep 24 16:44 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-830000                                                                                                 | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-830000                                                                                                 | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-830000 --dry-run                                                                                       | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-830000                                                                                                 | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-830000 | jenkins | v1.34.0 | 13 Sep 24 16:44 PDT |                     |
	|           | -p functional-830000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 16:44:27
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 16:44:27.519394    3172 out.go:345] Setting OutFile to fd 1 ...
	I0913 16:44:27.519526    3172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:44:27.519530    3172 out.go:358] Setting ErrFile to fd 2...
	I0913 16:44:27.519533    3172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:44:27.519664    3172 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 16:44:27.521028    3172 out.go:352] Setting JSON to false
	I0913 16:44:27.538851    3172 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2631,"bootTime":1726268436,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 16:44:27.538931    3172 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 16:44:27.544175    3172 out.go:177] * [functional-830000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 16:44:27.551122    3172 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 16:44:27.551167    3172 notify.go:220] Checking for updates...
	I0913 16:44:27.558111    3172 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 16:44:27.561135    3172 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 16:44:27.564121    3172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 16:44:27.567074    3172 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 16:44:27.570110    3172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 16:44:27.573310    3172 config.go:182] Loaded profile config "functional-830000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 16:44:27.573554    3172 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 16:44:27.578056    3172 out.go:177] * Using the qemu2 driver based on the existing profile
	I0913 16:44:27.585064    3172 start.go:297] selected driver: qemu2
	I0913 16:44:27.585069    3172 start.go:901] validating driver "qemu2" against &{Name:functional-830000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-830000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 16:44:27.585119    3172 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 16:44:27.591142    3172 out.go:201] 
	W0913 16:44:27.594002    3172 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0913 16:44:27.598122    3172 out.go:201] 
	
	
	==> Docker <==
	Sep 13 23:44:23 functional-830000 dockerd[5917]: time="2024-09-13T23:44:23.833900898Z" level=info msg="ignoring event" container=5bf9243bf564cffd52a00decacd77ff61acbcc0eb1e738718c3fe0a95629eb7f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:44:23 functional-830000 dockerd[5923]: time="2024-09-13T23:44:23.834194027Z" level=info msg="shim disconnected" id=5bf9243bf564cffd52a00decacd77ff61acbcc0eb1e738718c3fe0a95629eb7f namespace=moby
	Sep 13 23:44:23 functional-830000 dockerd[5923]: time="2024-09-13T23:44:23.834231720Z" level=warning msg="cleaning up after shim disconnected" id=5bf9243bf564cffd52a00decacd77ff61acbcc0eb1e738718c3fe0a95629eb7f namespace=moby
	Sep 13 23:44:23 functional-830000 dockerd[5923]: time="2024-09-13T23:44:23.834237176Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 23:44:23 functional-830000 dockerd[5923]: time="2024-09-13T23:44:23.838792754Z" level=warning msg="cleanup warnings time=\"2024-09-13T23:44:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Sep 13 23:44:28 functional-830000 dockerd[5923]: time="2024-09-13T23:44:28.389830569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 13 23:44:28 functional-830000 dockerd[5923]: time="2024-09-13T23:44:28.389862348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 13 23:44:28 functional-830000 dockerd[5923]: time="2024-09-13T23:44:28.389870969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 13 23:44:28 functional-830000 dockerd[5923]: time="2024-09-13T23:44:28.391762897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 13 23:44:28 functional-830000 dockerd[5923]: time="2024-09-13T23:44:28.392472396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 13 23:44:28 functional-830000 dockerd[5923]: time="2024-09-13T23:44:28.392489764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 13 23:44:28 functional-830000 dockerd[5923]: time="2024-09-13T23:44:28.392494720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 13 23:44:28 functional-830000 dockerd[5923]: time="2024-09-13T23:44:28.392515837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 13 23:44:28 functional-830000 cri-dockerd[6253]: time="2024-09-13T23:44:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c5811ae4216ca70cca2ef89a0fd3ab113829c95cb2df0e25db244d54da086a85/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 13 23:44:28 functional-830000 cri-dockerd[6253]: time="2024-09-13T23:44:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/10cd7cdf30f57fac07ede0f54a9f20496af98aa60dbbf6ef961d050fb9fa5bc3/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 13 23:44:28 functional-830000 dockerd[5917]: time="2024-09-13T23:44:28.694190438Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 13 23:44:30 functional-830000 dockerd[5923]: time="2024-09-13T23:44:30.903383705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 13 23:44:30 functional-830000 dockerd[5923]: time="2024-09-13T23:44:30.903462173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 13 23:44:30 functional-830000 dockerd[5923]: time="2024-09-13T23:44:30.903472711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 13 23:44:30 functional-830000 dockerd[5923]: time="2024-09-13T23:44:30.903513402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 13 23:44:30 functional-830000 dockerd[5917]: time="2024-09-13T23:44:30.938254299Z" level=info msg="ignoring event" container=a96e9e501a07f541828d9e5a17fa07f451f3042c33e8687a3211389e89538228 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:44:30 functional-830000 dockerd[5923]: time="2024-09-13T23:44:30.938367377Z" level=info msg="shim disconnected" id=a96e9e501a07f541828d9e5a17fa07f451f3042c33e8687a3211389e89538228 namespace=moby
	Sep 13 23:44:30 functional-830000 dockerd[5923]: time="2024-09-13T23:44:30.938394033Z" level=warning msg="cleaning up after shim disconnected" id=a96e9e501a07f541828d9e5a17fa07f451f3042c33e8687a3211389e89538228 namespace=moby
	Sep 13 23:44:30 functional-830000 dockerd[5923]: time="2024-09-13T23:44:30.938397782Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 13 23:44:30 functional-830000 dockerd[5923]: time="2024-09-13T23:44:30.947697317Z" level=warning msg="cleanup warnings time=\"2024-09-13T23:44:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a96e9e501a07f       72565bf5bbedf                                                                                         2 seconds ago        Exited              echoserver-arm            3                   9c0d21a6d11dc       hello-node-64b4f8f9ff-b6q6c
	801a3e9fd77bc       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 seconds ago       Exited              mount-munger              0                   5bf9243bf564c       busybox-mount
	26b5319979e73       72565bf5bbedf                                                                                         19 seconds ago       Exited              echoserver-arm            2                   6f3506286fc64       hello-node-connect-65d86f57f4-dhk7r
	a6988ab2afd45       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                         19 seconds ago       Running             myfrontend                0                   3efeba8d4e314       sp-pod
	6be538929a7a4       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                         41 seconds ago       Running             nginx                     0                   555dee44a3249       nginx-svc
	2304baa65444f       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   a0e59f47d7aff       coredns-7c65d6cfc9-hlwtf
	7b128501835d4       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   d6b7ed092baa9       storage-provisioner
	4cdd2b83d7c83       24a140c548c07                                                                                         About a minute ago   Running             kube-proxy                2                   4899c4cba7d7d       kube-proxy-hfbrd
	09ef9178fa26e       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   4355087d474a1       etcd-functional-830000
	d3c0f8114c5a2       7f8aa378bb47d                                                                                         About a minute ago   Running             kube-scheduler            2                   6c9a501fc95eb       kube-scheduler-functional-830000
	dc5229bf855ce       279f381cb3736                                                                                         About a minute ago   Running             kube-controller-manager   2                   4d18d1bff01e2       kube-controller-manager-functional-830000
	e8e3e25913cb2       d3f53a98c0a9d                                                                                         About a minute ago   Running             kube-apiserver            0                   8865d3a873719       kube-apiserver-functional-830000
	3fb66eec21ad2       2f6c962e7b831                                                                                         About a minute ago   Exited              coredns                   1                   d6e2bec4057f2       coredns-7c65d6cfc9-hlwtf
	c89d7d6175db9       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       1                   e09bcb823b92e       storage-provisioner
	a32e86ab88446       24a140c548c07                                                                                         About a minute ago   Exited              kube-proxy                1                   c5b9ed77c0871       kube-proxy-hfbrd
	7b7e2dbb2704c       27e3830e14027                                                                                         About a minute ago   Exited              etcd                      1                   5ebe627d34b24       etcd-functional-830000
	9c49556f8c342       279f381cb3736                                                                                         About a minute ago   Exited              kube-controller-manager   1                   1b8f2b0cd32e3       kube-controller-manager-functional-830000
	1c8ad52c53b0c       7f8aa378bb47d                                                                                         About a minute ago   Exited              kube-scheduler            1                   6bf3c5e1cb8c9       kube-scheduler-functional-830000
	
	
	==> coredns [2304baa65444] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46032 - 24815 "HINFO IN 6116207618477008684.5935214009258711857. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.005309089s
	[INFO] 10.244.0.1:37959 - 64481 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000105037s
	[INFO] 10.244.0.1:44288 - 64670 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000096541s
	[INFO] 10.244.0.1:34969 - 37402 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.00093238s
	[INFO] 10.244.0.1:52273 - 1410 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000101288s
	[INFO] 10.244.0.1:54656 - 58195 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000060848s
	[INFO] 10.244.0.1:44159 - 8099 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000035276s
	
	
	==> coredns [3fb66eec21ad] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58036 - 37870 "HINFO IN 7914415152877766257.7097604474644662652. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004758284s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-830000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-830000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=functional-830000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T16_42_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:42:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-830000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 23:44:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 23:44:21 +0000   Fri, 13 Sep 2024 23:42:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 23:44:21 +0000   Fri, 13 Sep 2024 23:42:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 23:44:21 +0000   Fri, 13 Sep 2024 23:42:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 23:44:21 +0000   Fri, 13 Sep 2024 23:42:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-830000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 a210de1980174a228b676e3a32b26b5b
	  System UUID:                a210de1980174a228b676e3a32b26b5b
	  Boot ID:                    e10e5a1d-6bc7-4065-9157-952fad3a17c9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-b6q6c                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  default                     hello-node-connect-65d86f57f4-dhk7r          0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 coredns-7c65d6cfc9-hlwtf                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m24s
	  kube-system                 etcd-functional-830000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m29s
	  kube-system                 kube-apiserver-functional-830000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-functional-830000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-hfbrd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-scheduler-functional-830000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-vhgql    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-mfwrn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m23s                  kube-proxy       
	  Normal  Starting                 71s                    kube-proxy       
	  Normal  Starting                 114s                   kube-proxy       
	  Normal  Starting                 2m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m33s (x8 over 2m33s)  kubelet          Node functional-830000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m33s (x8 over 2m33s)  kubelet          Node functional-830000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m33s (x7 over 2m33s)  kubelet          Node functional-830000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m30s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m29s                  kubelet          Node functional-830000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m29s                  kubelet          Node functional-830000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m29s                  kubelet          Node functional-830000 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           2m25s                  node-controller  Node functional-830000 event: Registered Node functional-830000 in Controller
	  Normal  NodeReady                2m25s                  kubelet          Node functional-830000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  118s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)    kubelet          Node functional-830000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)    kubelet          Node functional-830000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x7 over 118s)    kubelet          Node functional-830000 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                   node-controller  Node functional-830000 event: Registered Node functional-830000 in Controller
	  Normal  Starting                 75s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  75s (x8 over 75s)      kubelet          Node functional-830000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    75s (x8 over 75s)      kubelet          Node functional-830000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     75s (x7 over 75s)      kubelet          Node functional-830000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  75s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           69s                    node-controller  Node functional-830000 event: Registered Node functional-830000 in Controller
	
	
	==> dmesg <==
	[  +3.464439] kauditd_printk_skb: 201 callbacks suppressed
	[ +13.097727] systemd-fstab-generator[4993]: Ignoring "noauto" option for root device
	[  +0.055673] kauditd_printk_skb: 33 callbacks suppressed
	[Sep13 23:43] systemd-fstab-generator[5436]: Ignoring "noauto" option for root device
	[  +0.052031] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.094779] systemd-fstab-generator[5469]: Ignoring "noauto" option for root device
	[  +0.108025] systemd-fstab-generator[5481]: Ignoring "noauto" option for root device
	[  +0.094147] systemd-fstab-generator[5495]: Ignoring "noauto" option for root device
	[  +5.117116] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.383598] systemd-fstab-generator[6129]: Ignoring "noauto" option for root device
	[  +0.093032] systemd-fstab-generator[6141]: Ignoring "noauto" option for root device
	[  +0.070459] systemd-fstab-generator[6153]: Ignoring "noauto" option for root device
	[  +0.088948] systemd-fstab-generator[6233]: Ignoring "noauto" option for root device
	[  +0.212012] systemd-fstab-generator[6409]: Ignoring "noauto" option for root device
	[  +0.699822] systemd-fstab-generator[6533]: Ignoring "noauto" option for root device
	[  +3.403578] kauditd_printk_skb: 199 callbacks suppressed
	[ +11.736238] systemd-fstab-generator[7533]: Ignoring "noauto" option for root device
	[  +0.055000] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.024734] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.251641] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.475430] kauditd_printk_skb: 20 callbacks suppressed
	[  +9.334348] kauditd_printk_skb: 13 callbacks suppressed
	[Sep13 23:44] kauditd_printk_skb: 38 callbacks suppressed
	[ +15.429727] kauditd_printk_skb: 21 callbacks suppressed
	[  +7.317689] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [09ef9178fa26] <==
	{"level":"info","ts":"2024-09-13T23:43:18.761974Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-09-13T23:43:18.762053Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:43:18.762141Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:43:18.764470Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T23:43:18.767524Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-13T23:43:18.769587Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-13T23:43:18.769619Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-13T23:43:18.770306Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-13T23:43:18.770346Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-13T23:43:19.760641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-13T23:43:19.760695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-13T23:43:19.760722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-13T23:43:19.760897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-09-13T23:43:19.760912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-13T23:43:19.760922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-09-13T23:43:19.760929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-09-13T23:43:19.763256Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-830000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T23:43:19.763339Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T23:43:19.763607Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T23:43:19.764248Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T23:43:19.764904Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T23:43:19.765627Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T23:43:19.766459Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-13T23:43:19.779276Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T23:43:19.779293Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [7b7e2dbb2704] <==
	{"level":"info","ts":"2024-09-13T23:42:36.764274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-13T23:42:36.764416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-09-13T23:42:36.764489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-09-13T23:42:36.764527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-13T23:42:36.764634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-09-13T23:42:36.764681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-09-13T23:42:36.770834Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-830000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T23:42:36.771126Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T23:42:36.772226Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T23:42:36.772591Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T23:42:36.772671Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T23:42:36.772813Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T23:42:36.773836Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T23:42:36.774562Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-09-13T23:42:36.775491Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T23:43:04.050899Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-13T23:43:04.050931Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-830000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-09-13T23:43:04.050972Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T23:43:04.051011Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T23:43:04.063474Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-13T23:43:04.063499Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-13T23:43:04.063522Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-09-13T23:43:04.064989Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-13T23:43:04.065025Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-09-13T23:43:04.065030Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-830000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 23:44:32 up 2 min,  0 users,  load average: 1.08, 0.49, 0.19
	Linux functional-830000 5.10.207 #1 SMP PREEMPT Fri Sep 13 18:07:06 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e8e3e25913cb] <==
	I0913 23:43:20.322904       1 policy_source.go:224] refreshing policies
	E0913 23:43:20.323658       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0913 23:43:20.323989       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0913 23:43:20.334712       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0913 23:43:20.334740       1 aggregator.go:171] initial CRD sync complete...
	I0913 23:43:20.334754       1 autoregister_controller.go:144] Starting autoregister controller
	I0913 23:43:20.334767       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0913 23:43:20.334777       1 cache.go:39] Caches are synced for autoregister controller
	I0913 23:43:20.376542       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0913 23:43:21.222568       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0913 23:43:21.860465       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0913 23:43:21.865216       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0913 23:43:21.875455       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0913 23:43:21.882489       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0913 23:43:21.884360       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0913 23:43:23.857053       1 controller.go:615] quota admission added evaluator for: endpoints
	I0913 23:43:24.008202       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0913 23:43:37.459870       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.50.116"}
	I0913 23:43:42.402347       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0913 23:43:42.444323       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.156.221"}
	I0913 23:43:47.438653       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.31.252"}
	I0913 23:43:57.881210       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.122.68"}
	I0913 23:44:27.976026       1 controller.go:615] quota admission added evaluator for: namespaces
	I0913 23:44:28.067495       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.155.21"}
	I0913 23:44:28.078995       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.135.160"}
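
	The alloc.go entries above record the ClusterIPs handed out to the services this test creates (invalid-svc, hello-node, nginx-svc, hello-node-connect, and the two dashboard services). As a hedged client-go sketch, not part of the test suite, reading one of those allocations back could look like this; the kubeconfig location is an assumption and the service name comes from the log.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes the default ~/.kube/config points at this cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		svc, err := cs.CoreV1().Services("default").Get(context.Background(), "hello-node-connect", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// For the run above this would print 10.96.122.68.
		fmt.Println("ClusterIP:", svc.Spec.ClusterIP)
	}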
	
	
	==> kube-controller-manager [9c49556f8c34] <==
	I0913 23:42:40.630672       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0913 23:42:40.630683       1 shared_informer.go:320] Caches are synced for job
	I0913 23:42:40.642709       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0913 23:42:40.649993       1 shared_informer.go:320] Caches are synced for ephemeral
	I0913 23:42:40.650021       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0913 23:42:40.650970       1 shared_informer.go:320] Caches are synced for cronjob
	I0913 23:42:40.652481       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0913 23:42:40.652538       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0913 23:42:40.741197       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0913 23:42:40.749471       1 shared_informer.go:320] Caches are synced for attach detach
	I0913 23:42:40.816654       1 shared_informer.go:320] Caches are synced for stateful set
	I0913 23:42:40.817791       1 shared_informer.go:320] Caches are synced for taint
	I0913 23:42:40.817854       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0913 23:42:40.817910       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-830000"
	I0913 23:42:40.817936       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0913 23:42:40.850123       1 shared_informer.go:320] Caches are synced for disruption
	I0913 23:42:40.851181       1 shared_informer.go:320] Caches are synced for daemon sets
	I0913 23:42:40.851593       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 23:42:40.857798       1 shared_informer.go:320] Caches are synced for resource quota
	I0913 23:42:40.955323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="339.798646ms"
	I0913 23:42:40.959103       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="3.756208ms"
	I0913 23:42:40.959323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="196.15µs"
	I0913 23:42:41.265857       1 shared_informer.go:320] Caches are synced for garbage collector
	I0913 23:42:41.323397       1 shared_informer.go:320] Caches are synced for garbage collector
	I0913 23:42:41.323447       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [dc5229bf855c] <==
	I0913 23:44:01.360453       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="31.653µs"
	I0913 23:44:03.379225       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="23.448µs"
	I0913 23:44:14.594990       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="39.026µs"
	I0913 23:44:16.749314       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="415.949µs"
	I0913 23:44:21.944359       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-830000"
	I0913 23:44:26.738870       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="27.489µs"
	I0913 23:44:28.022156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="10.224532ms"
	E0913 23:44:28.022306       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0913 23:44:28.029480       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.114436ms"
	E0913 23:44:28.029502       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0913 23:44:28.032501       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="14.275153ms"
	E0913 23:44:28.032518       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0913 23:44:28.035174       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.126839ms"
	E0913 23:44:28.035191       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0913 23:44:28.036324       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="2.104674ms"
	E0913 23:44:28.036337       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0913 23:44:28.046409       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.362584ms"
	I0913 23:44:28.055896       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.646558ms"
	I0913 23:44:28.056291       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.953µs"
	I0913 23:44:28.056707       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.192024ms"
	I0913 23:44:28.061549       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="15.452µs"
	I0913 23:44:28.063223       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.467332ms"
	I0913 23:44:28.063661       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="13.286µs"
	I0913 23:44:28.066421       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="17.493µs"
	I0913 23:44:31.827049       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="23.366µs"
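
	The repeated "serviceaccount \"kubernetes-dashboard\" not found" errors above are an ordering race, not a persistent failure: the dashboard ReplicaSets are synced before their ServiceAccount exists, the controller retries, and the later "Finished syncing" entries show the syncs succeeding once the account appears. A hedged client-go sketch of waiting out the same dependency explicitly; the poll interval and timeout are arbitrary choices.

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Poll until the ServiceAccount the ReplicaSets depend on exists.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 30*time.Second, true,
			func(ctx context.Context) (bool, error) {
				_, err := cs.CoreV1().ServiceAccounts("kubernetes-dashboard").Get(ctx, "kubernetes-dashboard", metav1.GetOptions{})
				return err == nil, nil // not there yet; keep polling, as the controller does
			})
		fmt.Println("serviceaccount ready:", err == nil)
	}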
	
	
	==> kube-proxy [4cdd2b83d7c8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 23:43:21.262178       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 23:43:21.265338       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0913 23:43:21.265373       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 23:43:21.277278       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 23:43:21.277401       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 23:43:21.277438       1 server_linux.go:169] "Using iptables Proxier"
	I0913 23:43:21.279352       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 23:43:21.279567       1 server.go:483] "Version info" version="v1.31.1"
	I0913 23:43:21.279673       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 23:43:21.281101       1 config.go:199] "Starting service config controller"
	I0913 23:43:21.281116       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 23:43:21.281239       1 config.go:105] "Starting endpoint slice config controller"
	I0913 23:43:21.281245       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 23:43:21.281456       1 config.go:328] "Starting node config controller"
	I0913 23:43:21.281463       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 23:43:21.381947       1 shared_informer.go:320] Caches are synced for node config
	I0913 23:43:21.381970       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 23:43:21.381959       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [a32e86ab8844] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0913 23:42:38.256589       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0913 23:42:38.259807       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0913 23:42:38.259843       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 23:42:38.267049       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0913 23:42:38.267065       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0913 23:42:38.267075       1 server_linux.go:169] "Using iptables Proxier"
	I0913 23:42:38.267730       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 23:42:38.267847       1 server.go:483] "Version info" version="v1.31.1"
	I0913 23:42:38.267856       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 23:42:38.268354       1 config.go:199] "Starting service config controller"
	I0913 23:42:38.268368       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 23:42:38.268378       1 config.go:105] "Starting endpoint slice config controller"
	I0913 23:42:38.268389       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 23:42:38.268586       1 config.go:328] "Starting node config controller"
	I0913 23:42:38.268600       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 23:42:38.369581       1 shared_informer.go:320] Caches are synced for node config
	I0913 23:42:38.369573       1 shared_informer.go:320] Caches are synced for service config
	I0913 23:42:38.369604       1 shared_informer.go:320] Caches are synced for endpoint slice config
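
	Both kube-proxy containers log the same startup pattern: the nftables cleanup fails with "Operation not supported" because this guest kernel has no nftables support, kube-proxy then falls back to the iptables proxier (IPv4 only, since IPv6 iptables is also unavailable), and the unset nodePortAddresses warning is advisory. A rough, hedged sketch of the same kind of probe, feeding a rule to nft over stdin much as the error text above suggests kube-proxy does; it assumes an nft binary on PATH.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("nft", "-f", "-") // read a ruleset from stdin
		cmd.Stdin = strings.NewReader("add table ip kube-proxy\n")
		if out, err := cmd.CombinedOutput(); err != nil {
			// On this ISO this reports the same "Operation not supported"
			// failure seen in the log above.
			fmt.Printf("nftables unavailable: %v: %s", err, out)
			return
		}
		fmt.Println("nftables available")
	}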
	
	
	==> kube-scheduler [1c8ad52c53b0] <==
	I0913 23:42:35.796294       1 serving.go:386] Generated self-signed cert in-memory
	W0913 23:42:37.317623       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0913 23:42:37.317729       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0913 23:42:37.317769       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0913 23:42:37.317785       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0913 23:42:37.329544       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0913 23:42:37.329588       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 23:42:37.330499       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0913 23:42:37.330550       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0913 23:42:37.330563       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 23:42:37.330659       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 23:42:37.433067       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0913 23:43:04.059724       1 run.go:72] "command failed" err="finished without leader elect"
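
	The closing "finished without leader elect" error lines up with the control-plane shutdown at 23:43:04 recorded in the etcd log above: this scheduler's leader-election loop appears to have ended because the process was stopped, not because an election failed. For orientation, a minimal hedged sketch of the same client-go leader-election pattern; the lease name, namespace, and identity here are illustrative only.

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "demo-scheduler", Namespace: "kube-system"}, // illustrative lease
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "demo-1"},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { fmt.Println("leading") },
				// RunOrDie returns after this fires; a caller that then
				// exits reports it much like the scheduler's run.go does.
				OnStoppedLeading: func() { fmt.Println("stopped leading") },
			},
		})
	}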
	
	
	==> kube-scheduler [d3c0f8114c5a] <==
	I0913 23:43:19.412179       1 serving.go:386] Generated self-signed cert in-memory
	W0913 23:43:20.249912       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0913 23:43:20.249997       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0913 23:43:20.250026       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0913 23:43:20.250033       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0913 23:43:20.279208       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0913 23:43:20.279227       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 23:43:20.280390       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0913 23:43:20.280426       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 23:43:20.280430       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0913 23:43:20.280511       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 23:43:20.380794       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 13 23:44:17 functional-830000 kubelet[6540]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 13 23:44:17 functional-830000 kubelet[6540]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 13 23:44:17 functional-830000 kubelet[6540]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 13 23:44:17 functional-830000 kubelet[6540]: I0913 23:44:17.813777    6540 scope.go:117] "RemoveContainer" containerID="f9d3a4f060dacf0ec3a0c096f2dea8f101ad4d2447d43bb06d92c1d39653ec43"
	Sep 13 23:44:20 functional-830000 kubelet[6540]: I0913 23:44:20.508590    6540 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/bf94b7b9-d453-4c9e-b6b2-f2087655e962-test-volume\") pod \"busybox-mount\" (UID: \"bf94b7b9-d453-4c9e-b6b2-f2087655e962\") " pod="default/busybox-mount"
	Sep 13 23:44:20 functional-830000 kubelet[6540]: I0913 23:44:20.508623    6540 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4znm4\" (UniqueName: \"kubernetes.io/projected/bf94b7b9-d453-4c9e-b6b2-f2087655e962-kube-api-access-4znm4\") pod \"busybox-mount\" (UID: \"bf94b7b9-d453-4c9e-b6b2-f2087655e962\") " pod="default/busybox-mount"
	Sep 13 23:44:23 functional-830000 kubelet[6540]: I0913 23:44:23.940063    6540 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/bf94b7b9-d453-4c9e-b6b2-f2087655e962-test-volume\") pod \"bf94b7b9-d453-4c9e-b6b2-f2087655e962\" (UID: \"bf94b7b9-d453-4c9e-b6b2-f2087655e962\") "
	Sep 13 23:44:23 functional-830000 kubelet[6540]: I0913 23:44:23.940101    6540 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4znm4\" (UniqueName: \"kubernetes.io/projected/bf94b7b9-d453-4c9e-b6b2-f2087655e962-kube-api-access-4znm4\") pod \"bf94b7b9-d453-4c9e-b6b2-f2087655e962\" (UID: \"bf94b7b9-d453-4c9e-b6b2-f2087655e962\") "
	Sep 13 23:44:23 functional-830000 kubelet[6540]: I0913 23:44:23.940320    6540 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf94b7b9-d453-4c9e-b6b2-f2087655e962-test-volume" (OuterVolumeSpecName: "test-volume") pod "bf94b7b9-d453-4c9e-b6b2-f2087655e962" (UID: "bf94b7b9-d453-4c9e-b6b2-f2087655e962"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 13 23:44:23 functional-830000 kubelet[6540]: I0913 23:44:23.943843    6540 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf94b7b9-d453-4c9e-b6b2-f2087655e962-kube-api-access-4znm4" (OuterVolumeSpecName: "kube-api-access-4znm4") pod "bf94b7b9-d453-4c9e-b6b2-f2087655e962" (UID: "bf94b7b9-d453-4c9e-b6b2-f2087655e962"). InnerVolumeSpecName "kube-api-access-4znm4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:44:24 functional-830000 kubelet[6540]: I0913 23:44:24.040815    6540 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/bf94b7b9-d453-4c9e-b6b2-f2087655e962-test-volume\") on node \"functional-830000\" DevicePath \"\""
	Sep 13 23:44:24 functional-830000 kubelet[6540]: I0913 23:44:24.040845    6540 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4znm4\" (UniqueName: \"kubernetes.io/projected/bf94b7b9-d453-4c9e-b6b2-f2087655e962-kube-api-access-4znm4\") on node \"functional-830000\" DevicePath \"\""
	Sep 13 23:44:24 functional-830000 kubelet[6540]: I0913 23:44:24.758878    6540 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bf9243bf564cffd52a00decacd77ff61acbcc0eb1e738718c3fe0a95629eb7f"
	Sep 13 23:44:26 functional-830000 kubelet[6540]: I0913 23:44:26.734430    6540 scope.go:117] "RemoveContainer" containerID="26b5319979e737e32b72bd7df38bbc99b346cf4373aa600487783506280c2560"
	Sep 13 23:44:26 functional-830000 kubelet[6540]: E0913 23:44:26.734499    6540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-dhk7r_default(8dfba7b2-7516-4ce4-8095-2095003437c3)\"" pod="default/hello-node-connect-65d86f57f4-dhk7r" podUID="8dfba7b2-7516-4ce4-8095-2095003437c3"
	Sep 13 23:44:28 functional-830000 kubelet[6540]: E0913 23:44:28.047840    6540 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bf94b7b9-d453-4c9e-b6b2-f2087655e962" containerName="mount-munger"
	Sep 13 23:44:28 functional-830000 kubelet[6540]: I0913 23:44:28.047867    6540 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf94b7b9-d453-4c9e-b6b2-f2087655e962" containerName="mount-munger"
	Sep 13 23:44:28 functional-830000 kubelet[6540]: I0913 23:44:28.186155    6540 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzmtn\" (UniqueName: \"kubernetes.io/projected/dcbd1a10-e418-41b0-9a78-30f1993ef3cf-kube-api-access-pzmtn\") pod \"dashboard-metrics-scraper-c5db448b4-vhgql\" (UID: \"dcbd1a10-e418-41b0-9a78-30f1993ef3cf\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-vhgql"
	Sep 13 23:44:28 functional-830000 kubelet[6540]: I0913 23:44:28.186184    6540 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9lzk\" (UniqueName: \"kubernetes.io/projected/a207dbae-80ce-4280-8bc9-326387473b2c-kube-api-access-w9lzk\") pod \"kubernetes-dashboard-695b96c756-mfwrn\" (UID: \"a207dbae-80ce-4280-8bc9-326387473b2c\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-mfwrn"
	Sep 13 23:44:28 functional-830000 kubelet[6540]: I0913 23:44:28.186195    6540 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a207dbae-80ce-4280-8bc9-326387473b2c-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-mfwrn\" (UID: \"a207dbae-80ce-4280-8bc9-326387473b2c\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-mfwrn"
	Sep 13 23:44:28 functional-830000 kubelet[6540]: I0913 23:44:28.186204    6540 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/dcbd1a10-e418-41b0-9a78-30f1993ef3cf-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-vhgql\" (UID: \"dcbd1a10-e418-41b0-9a78-30f1993ef3cf\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-vhgql"
	Sep 13 23:44:30 functional-830000 kubelet[6540]: I0913 23:44:30.732612    6540 scope.go:117] "RemoveContainer" containerID="113d60e6a15c7eba7cc2f156a2c6929c9a754c0efe38aca51374450e1160ab95"
	Sep 13 23:44:31 functional-830000 kubelet[6540]: I0913 23:44:31.819047    6540 scope.go:117] "RemoveContainer" containerID="113d60e6a15c7eba7cc2f156a2c6929c9a754c0efe38aca51374450e1160ab95"
	Sep 13 23:44:31 functional-830000 kubelet[6540]: I0913 23:44:31.819191    6540 scope.go:117] "RemoveContainer" containerID="a96e9e501a07f541828d9e5a17fa07f451f3042c33e8687a3211389e89538228"
	Sep 13 23:44:31 functional-830000 kubelet[6540]: E0913 23:44:31.819252    6540 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-b6q6c_default(2c3dedab-f384-46e1-84be-9ecde2a92fb0)\"" pod="default/hello-node-64b4f8f9ff-b6q6c" podUID="2c3dedab-f384-46e1-84be-9ecde2a92fb0"
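
	The kubelet entries above show the failure behind this test case: the echoserver-arm containers for hello-node and hello-node-connect keep crashing, with the CrashLoopBackOff delay growing from 20s to 40s. A hedged client-go sketch of surfacing that state programmatically (the namespace is an assumption; kubectl get pods would show the same):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		pods, err := cs.CoreV1().Pods("default").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			for _, s := range p.Status.ContainerStatuses {
				// A waiting container with this reason is what the kubelet
				// is backing off on in the log above.
				if w := s.State.Waiting; w != nil && w.Reason == "CrashLoopBackOff" {
					fmt.Printf("%s/%s: %s (restarts: %d)\n", p.Name, s.Name, w.Message, s.RestartCount)
				}
			}
		}
	}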
	
	
	==> storage-provisioner [7b128501835d] <==
	I0913 23:43:21.196328       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 23:43:21.200643       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 23:43:21.200951       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 23:43:38.617333       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 23:43:38.618433       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-830000_8f1a56dd-7512-47ca-ac24-7b9d4b80f398!
	I0913 23:43:38.619374       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"39baf3a9-7e24-459b-a151-7ef9ea60c0cd", APIVersion:"v1", ResourceVersion:"649", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-830000_8f1a56dd-7512-47ca-ac24-7b9d4b80f398 became leader
	I0913 23:43:38.720662       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-830000_8f1a56dd-7512-47ca-ac24-7b9d4b80f398!
	I0913 23:44:00.065739       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0913 23:44:00.065889       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    fbfacd6f-e1a9-4ef1-82c3-506dbba504e1 377 0 2024-09-13 23:42:09 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-13 23:42:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-22b3df5f-82da-4f91-8146-f5e8931e9df2 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  22b3df5f-82da-4f91-8146-f5e8931e9df2 765 0 2024-09-13 23:44:00 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-13 23:44:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-13 23:44:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0913 23:44:00.066336       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-22b3df5f-82da-4f91-8146-f5e8931e9df2" provisioned
	I0913 23:44:00.066412       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0913 23:44:00.066445       1 volume_store.go:212] Trying to save persistentvolume "pvc-22b3df5f-82da-4f91-8146-f5e8931e9df2"
	I0913 23:44:00.066572       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"22b3df5f-82da-4f91-8146-f5e8931e9df2", APIVersion:"v1", ResourceVersion:"765", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0913 23:44:00.071607       1 volume_store.go:219] persistentvolume "pvc-22b3df5f-82da-4f91-8146-f5e8931e9df2" saved
	I0913 23:44:00.071701       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"22b3df5f-82da-4f91-8146-f5e8931e9df2", APIVersion:"v1", ResourceVersion:"765", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-22b3df5f-82da-4f91-8146-f5e8931e9df2
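
	The provisioner log above traces the full hostpath flow for default/myclaim: the claim is observed, pvc-22b3df5f-82da-4f91-8146-f5e8931e9df2 is provisioned under /tmp/hostpath-provisioner, the PersistentVolume object is saved, and a ProvisioningSucceeded event is emitted. A hedged client-go sketch for verifying the result from the client side; the claim name comes from the log and the kubeconfig location is an assumption.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		pvc, err := cs.CoreV1().PersistentVolumeClaims("default").Get(context.Background(), "myclaim", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if pvc.Status.Phase == corev1.ClaimBound {
			// In the run above the bound volume is pvc-22b3df5f-... .
			fmt.Println("bound to:", pvc.Spec.VolumeName)
		}
	}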
	
	
	==> storage-provisioner [c89d7d6175db] <==
	I0913 23:42:38.202526       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 23:42:38.206232       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 23:42:38.206278       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 23:42:55.621658       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 23:42:55.622135       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-830000_00d494c1-f408-4802-872c-aff84f418ced!
	I0913 23:42:55.621949       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"39baf3a9-7e24-459b-a151-7ef9ea60c0cd", APIVersion:"v1", ResourceVersion:"535", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-830000_00d494c1-f408-4802-872c-aff84f418ced became leader
	I0913 23:42:55.724826       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-830000_00d494c1-f408-4802-872c-aff84f418ced!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-830000 -n functional-830000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-830000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-c5db448b4-vhgql kubernetes-dashboard-695b96c756-mfwrn
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-830000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-vhgql kubernetes-dashboard-695b96c756-mfwrn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-830000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-vhgql kubernetes-dashboard-695b96c756-mfwrn: exit status 1 (46.880791ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-830000/192.168.105.4
	Start Time:       Fri, 13 Sep 2024 16:44:20 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://801a3e9fd77bcdb8cb90098891262b162bd268b5115b1a37bf16f118476bb1e3
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 13 Sep 2024 16:44:22 -0700
	      Finished:     Fri, 13 Sep 2024 16:44:22 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4znm4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-4znm4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  13s   default-scheduler  Successfully assigned default/busybox-mount to functional-830000
	  Normal  Pulling    13s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     11s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.352s (1.352s including waiting). Image size: 3547125 bytes.
	  Normal  Created    11s   kubelet            Created container mount-munger
	  Normal  Started    11s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-c5db448b4-vhgql" not found
	Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-mfwrn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-830000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-vhgql kubernetes-dashboard-695b96c756-mfwrn: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (35.59s)
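
The post-mortem helper above locates non-running pods with the field selector status.phase!=Running. The equivalent call through client-go, as a hedged reference sketch (kubeconfig location assumed):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Same filter the helper passes to kubectl via --field-selector.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}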

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (214.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 node stop m02 -v=7 --alsologtostderr
E0913 16:48:42.478557    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:48:42.485760    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:48:42.499046    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:48:42.521356    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:48:42.563333    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:48:42.646787    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:48:42.810219    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:48:43.131868    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:48:43.775344    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:48:45.058736    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:48:47.622183    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-475000 node stop m02 -v=7 --alsologtostderr: (12.182351541s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr
E0913 16:48:52.745782    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:49:02.988487    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:49:18.657553    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:49:23.471097    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:49:46.385656    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:50:04.434168    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:51:26.356810    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr: exit status 7 (2m55.963373792s)

                                                
                                                
-- stdout --
	ha-475000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-475000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-475000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-475000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 16:48:52.025281    3553 out.go:345] Setting OutFile to fd 1 ...
	I0913 16:48:52.025440    3553 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:48:52.025444    3553 out.go:358] Setting ErrFile to fd 2...
	I0913 16:48:52.025447    3553 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:48:52.025569    3553 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 16:48:52.025721    3553 out.go:352] Setting JSON to false
	I0913 16:48:52.025731    3553 mustload.go:65] Loading cluster: ha-475000
	I0913 16:48:52.025770    3553 notify.go:220] Checking for updates...
	I0913 16:48:52.025966    3553 config.go:182] Loaded profile config "ha-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 16:48:52.025972    3553 status.go:255] checking status of ha-475000 ...
	I0913 16:48:52.026673    3553 status.go:330] ha-475000 host status = "Running" (err=<nil>)
	I0913 16:48:52.026679    3553 host.go:66] Checking if "ha-475000" exists ...
	I0913 16:48:52.026769    3553 host.go:66] Checking if "ha-475000" exists ...
	I0913 16:48:52.026876    3553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 16:48:52.026884    3553 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/id_rsa Username:docker}
	W0913 16:49:17.947802    3553 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0913 16:49:17.947925    3553 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0913 16:49:17.947944    3553 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0913 16:49:17.947954    3553 status.go:257] ha-475000 status: &{Name:ha-475000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0913 16:49:17.947974    3553 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0913 16:49:17.947983    3553 status.go:255] checking status of ha-475000-m02 ...
	I0913 16:49:17.948413    3553 status.go:330] ha-475000-m02 host status = "Stopped" (err=<nil>)
	I0913 16:49:17.948423    3553 status.go:343] host is not running, skipping remaining checks
	I0913 16:49:17.948428    3553 status.go:257] ha-475000-m02 status: &{Name:ha-475000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 16:49:17.948438    3553 status.go:255] checking status of ha-475000-m03 ...
	I0913 16:49:17.949545    3553 status.go:330] ha-475000-m03 host status = "Running" (err=<nil>)
	I0913 16:49:17.949556    3553 host.go:66] Checking if "ha-475000-m03" exists ...
	I0913 16:49:17.949716    3553 host.go:66] Checking if "ha-475000-m03" exists ...
	I0913 16:49:17.949852    3553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 16:49:17.949861    3553 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m03/id_rsa Username:docker}
	W0913 16:50:32.950708    3553 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0913 16:50:32.950769    3553 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0913 16:50:32.950776    3553 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0913 16:50:32.950780    3553 status.go:257] ha-475000-m03 status: &{Name:ha-475000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0913 16:50:32.950789    3553 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0913 16:50:32.950792    3553 status.go:255] checking status of ha-475000-m04 ...
	I0913 16:50:32.951609    3553 status.go:330] ha-475000-m04 host status = "Running" (err=<nil>)
	I0913 16:50:32.951618    3553 host.go:66] Checking if "ha-475000-m04" exists ...
	I0913 16:50:32.951715    3553 host.go:66] Checking if "ha-475000-m04" exists ...
	I0913 16:50:32.951842    3553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 16:50:32.951848    3553 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m04/id_rsa Username:docker}
	W0913 16:51:47.951265    3553 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0913 16:51:47.951389    3553 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0913 16:51:47.951439    3553 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0913 16:51:47.951456    3553 status.go:257] ha-475000-m04 status: &{Name:ha-475000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0913 16:51:47.951468    3553 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
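
The 2m55s status runtime in the stderr trace above is three consecutive SSH dial timeouts: each unreachable node (192.168.105.5, .7, .8) burns a full OS-level TCP connect timeout (about 26s for the first dial and roughly 75s for the others, per the timestamps) before status moves on. A hedged sketch of the same reachability probe with an explicit, shorter deadline; the address is copied from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Bound the connect attempt instead of waiting for the OS default.
		conn, err := net.DialTimeout("tcp", "192.168.105.5:22", 5*time.Second)
		if err != nil {
			fmt.Println("ssh port unreachable:", err) // the failure mode logged above
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable")
	}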
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr": ha-475000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-475000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-475000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-475000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr": ha-475000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-475000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-475000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-475000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr": ha-475000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-475000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-475000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-475000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000: exit status 3 (25.95932225s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0913 16:52:13.910810    3632 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0913 16:52:13.910824    3632 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-475000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.11s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.317033875s)
ha_test.go:413: expected profile "ha-475000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-475000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-475000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-475000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docke
r\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
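The assertion above decodes the `profile list --output json` payload and compares the profile's Status field; it failed because the whole cluster had gone down, so "ha-475000" reported "Stopped" rather than the expected "Degraded". A minimal sketch of that check, using only the Name and Status fields visible in the JSON above (the struct and file names are illustrative, not the actual ha_test.go types):

	// checkstatus.go - hypothetical standalone version of the status assertion.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// Only the two fields the assertion reads; the large Config blob is ignored.
	type profile struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	}

	type profileList struct {
		Valid []profile `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "profile list failed:", err)
			os.Exit(1)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Fprintln(os.Stderr, "bad JSON:", err)
			os.Exit(1)
		}
		for _, p := range pl.Valid {
			if p.Name == "ha-475000" && p.Status != "Degraded" {
				fmt.Printf("expected profile %q to have Degraded status, got %q\n", p.Name, p.Status)
			}
		}
	}
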
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000
E0913 16:53:42.475384    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000: exit status 3 (25.954814208s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0913 16:53:58.182014    3684 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0913 16:53:58.182022    3684 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
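The "failed to get storage capacity of /var" error is the status probe trying to run `df -h /var | awk 'NR==2{print $5}'` on the node over SSH (the exact command appears verbatim in later stderr blocks); here it never got that far because the SSH dial timed out. A local sketch of the same extraction with the SSH transport left out, mirroring the logged command:

	// dfprobe.go - hypothetical local version of the /var capacity check.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sh", "-c", "df -h /var").Output()
		if err != nil {
			fmt.Println("df failed:", err)
			return
		}
		lines := strings.Split(strings.TrimSpace(string(out)), "\n")
		if len(lines) < 2 {
			fmt.Println("unexpected df output")
			return
		}
		fields := strings.Fields(lines[1])
		if len(fields) < 5 {
			fmt.Println("unexpected df output")
			return
		}
		// Field 5 (1-indexed) is Use%, matching awk 'NR==2{print $5}'.
		fmt.Println("/var use:", fields[4])
	}
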
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-475000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.27s)

TestMultiControlPlane/serial/RestartSecondaryNode (209.02s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-475000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.087044917s)

-- stdout --
	* Starting "ha-475000-m02" control-plane node in "ha-475000" cluster
	* Restarting existing qemu2 VM for "ha-475000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-475000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 16:53:58.215220    3694 out.go:345] Setting OutFile to fd 1 ...
	I0913 16:53:58.215500    3694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:53:58.215505    3694 out.go:358] Setting ErrFile to fd 2...
	I0913 16:53:58.215507    3694 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:53:58.215625    3694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 16:53:58.215878    3694 mustload.go:65] Loading cluster: ha-475000
	I0913 16:53:58.216130    3694 config.go:182] Loaded profile config "ha-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0913 16:53:58.216342    3694 host.go:58] "ha-475000-m02" host status: Stopped
	I0913 16:53:58.220731    3694 out.go:177] * Starting "ha-475000-m02" control-plane node in "ha-475000" cluster
	I0913 16:53:58.223633    3694 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 16:53:58.223650    3694 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 16:53:58.223659    3694 cache.go:56] Caching tarball of preloaded images
	I0913 16:53:58.223732    3694 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 16:53:58.223738    3694 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 16:53:58.223790    3694 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/ha-475000/config.json ...
	I0913 16:53:58.224688    3694 start.go:360] acquireMachinesLock for ha-475000-m02: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 16:53:58.224752    3694 start.go:364] duration metric: took 30.458µs to acquireMachinesLock for "ha-475000-m02"
	I0913 16:53:58.224761    3694 start.go:96] Skipping create...Using existing machine configuration
	I0913 16:53:58.224766    3694 fix.go:54] fixHost starting: m02
	I0913 16:53:58.224866    3694 fix.go:112] recreateIfNeeded on ha-475000-m02: state=Stopped err=<nil>
	W0913 16:53:58.224871    3694 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 16:53:58.228559    3694 out.go:177] * Restarting existing qemu2 VM for "ha-475000-m02" ...
	I0913 16:53:58.232741    3694 qemu.go:418] Using hvf for hardware acceleration
	I0913 16:53:58.232779    3694 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:99:4a:0a:1d:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m02/disk.qcow2
	I0913 16:53:58.235497    3694 main.go:141] libmachine: STDOUT: 
	I0913 16:53:58.235512    3694 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 16:53:58.235540    3694 fix.go:56] duration metric: took 10.77275ms for fixHost
	I0913 16:53:58.235549    3694 start.go:83] releasing machines lock for "ha-475000-m02", held for 10.786958ms
	W0913 16:53:58.235555    3694 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 16:53:58.235579    3694 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 16:53:58.235582    3694 start.go:729] Will try again in 5 seconds ...
	I0913 16:54:03.237631    3694 start.go:360] acquireMachinesLock for ha-475000-m02: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 16:54:03.237740    3694 start.go:364] duration metric: took 92.208µs to acquireMachinesLock for "ha-475000-m02"
	I0913 16:54:03.237770    3694 start.go:96] Skipping create...Using existing machine configuration
	I0913 16:54:03.237775    3694 fix.go:54] fixHost starting: m02
	I0913 16:54:03.237928    3694 fix.go:112] recreateIfNeeded on ha-475000-m02: state=Stopped err=<nil>
	W0913 16:54:03.237933    3694 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 16:54:03.241448    3694 out.go:177] * Restarting existing qemu2 VM for "ha-475000-m02" ...
	I0913 16:54:03.248907    3694 qemu.go:418] Using hvf for hardware acceleration
	I0913 16:54:03.248948    3694 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:99:4a:0a:1d:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m02/disk.qcow2
	I0913 16:54:03.250930    3694 main.go:141] libmachine: STDOUT: 
	I0913 16:54:03.250947    3694 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 16:54:03.250967    3694 fix.go:56] duration metric: took 13.193292ms for fixHost
	I0913 16:54:03.250971    3694 start.go:83] releasing machines lock for "ha-475000-m02", held for 13.22675ms
	W0913 16:54:03.251003    3694 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-475000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-475000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 16:54:03.255537    3694 out.go:201] 
	W0913 16:54:03.259503    3694 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 16:54:03.259508    3694 out.go:270] * 
	* 
	W0913 16:54:03.261122    3694 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 16:54:03.265480    3694 out.go:201] 

** /stderr **
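Both restart attempts in this stderr die at the same precondition: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A quick probe that reproduces the "Connection refused" symptom when the daemon is down; the socket path is taken from the log, the probe itself is illustrative:

	// vmnetprobe.go - hypothetical check that socket_vmnet is accepting connections.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With the daemon down this prints:
			// dial unix /var/run/socket_vmnet: connect: connection refused
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
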
ha_test.go:422: I0913 16:53:58.215220    3694 out.go:345] Setting OutFile to fd 1 ...
I0913 16:53:58.215500    3694 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 16:53:58.215505    3694 out.go:358] Setting ErrFile to fd 2...
I0913 16:53:58.215507    3694 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 16:53:58.215625    3694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
I0913 16:53:58.215878    3694 mustload.go:65] Loading cluster: ha-475000
I0913 16:53:58.216130    3694 config.go:182] Loaded profile config "ha-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W0913 16:53:58.216342    3694 host.go:58] "ha-475000-m02" host status: Stopped
I0913 16:53:58.220731    3694 out.go:177] * Starting "ha-475000-m02" control-plane node in "ha-475000" cluster
I0913 16:53:58.223633    3694 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0913 16:53:58.223650    3694 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I0913 16:53:58.223659    3694 cache.go:56] Caching tarball of preloaded images
I0913 16:53:58.223732    3694 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0913 16:53:58.223738    3694 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I0913 16:53:58.223790    3694 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/ha-475000/config.json ...
I0913 16:53:58.224688    3694 start.go:360] acquireMachinesLock for ha-475000-m02: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0913 16:53:58.224752    3694 start.go:364] duration metric: took 30.458µs to acquireMachinesLock for "ha-475000-m02"
I0913 16:53:58.224761    3694 start.go:96] Skipping create...Using existing machine configuration
I0913 16:53:58.224766    3694 fix.go:54] fixHost starting: m02
I0913 16:53:58.224866    3694 fix.go:112] recreateIfNeeded on ha-475000-m02: state=Stopped err=<nil>
W0913 16:53:58.224871    3694 fix.go:138] unexpected machine state, will restart: <nil>
I0913 16:53:58.228559    3694 out.go:177] * Restarting existing qemu2 VM for "ha-475000-m02" ...
I0913 16:53:58.232741    3694 qemu.go:418] Using hvf for hardware acceleration
I0913 16:53:58.232779    3694 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:99:4a:0a:1d:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m02/disk.qcow2
I0913 16:53:58.235497    3694 main.go:141] libmachine: STDOUT: 
I0913 16:53:58.235512    3694 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0913 16:53:58.235540    3694 fix.go:56] duration metric: took 10.77275ms for fixHost
I0913 16:53:58.235549    3694 start.go:83] releasing machines lock for "ha-475000-m02", held for 10.786958ms
W0913 16:53:58.235555    3694 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0913 16:53:58.235579    3694 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0913 16:53:58.235582    3694 start.go:729] Will try again in 5 seconds ...
I0913 16:54:03.237631    3694 start.go:360] acquireMachinesLock for ha-475000-m02: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0913 16:54:03.237740    3694 start.go:364] duration metric: took 92.208µs to acquireMachinesLock for "ha-475000-m02"
I0913 16:54:03.237770    3694 start.go:96] Skipping create...Using existing machine configuration
I0913 16:54:03.237775    3694 fix.go:54] fixHost starting: m02
I0913 16:54:03.237928    3694 fix.go:112] recreateIfNeeded on ha-475000-m02: state=Stopped err=<nil>
W0913 16:54:03.237933    3694 fix.go:138] unexpected machine state, will restart: <nil>
I0913 16:54:03.241448    3694 out.go:177] * Restarting existing qemu2 VM for "ha-475000-m02" ...
I0913 16:54:03.248907    3694 qemu.go:418] Using hvf for hardware acceleration
I0913 16:54:03.248948    3694 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:99:4a:0a:1d:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m02/disk.qcow2
I0913 16:54:03.250930    3694 main.go:141] libmachine: STDOUT: 
I0913 16:54:03.250947    3694 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0913 16:54:03.250967    3694 fix.go:56] duration metric: took 13.193292ms for fixHost
I0913 16:54:03.250971    3694 start.go:83] releasing machines lock for "ha-475000-m02", held for 13.22675ms
W0913 16:54:03.251003    3694 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-475000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-475000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0913 16:54:03.255537    3694 out.go:201] 
W0913 16:54:03.259503    3694 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0913 16:54:03.259508    3694 out.go:270] * 
* 
W0913 16:54:03.261122    3694 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0913 16:54:03.265480    3694 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-475000 node start m02 -v=7 --alsologtostderr": exit status 80
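The stderr above also shows the shape of the recovery logic: one restart attempt, a logged "Will try again in 5 seconds", a second attempt, then a hard exit with GUEST_NODE_PROVISION (exit status 80). A sketch of that two-attempt flow; startHost is a stand-in for the real driver start (which execs socket_vmnet_client plus qemu-system-aarch64), and the error string is copied from the log:

	// retrystart.go - hypothetical reduction of the start/retry flow in start.go.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	func startHost() error {
		// Stand-in for the qemu2 driver start seen in the log.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := startHost()
		if err == nil {
			return
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_NODE_PROVISION:", err)
			os.Exit(80)
		}
	}
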
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr
E0913 16:54:10.198638    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:54:18.655293    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr: exit status 7 (2m57.936231167s)

-- stdout --
	ha-475000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-475000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-475000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-475000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0913 16:54:03.302924    3698 out.go:345] Setting OutFile to fd 1 ...
	I0913 16:54:03.303116    3698 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:54:03.303122    3698 out.go:358] Setting ErrFile to fd 2...
	I0913 16:54:03.303125    3698 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:54:03.303269    3698 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 16:54:03.303398    3698 out.go:352] Setting JSON to false
	I0913 16:54:03.303410    3698 mustload.go:65] Loading cluster: ha-475000
	I0913 16:54:03.303467    3698 notify.go:220] Checking for updates...
	I0913 16:54:03.303646    3698 config.go:182] Loaded profile config "ha-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 16:54:03.303653    3698 status.go:255] checking status of ha-475000 ...
	I0913 16:54:03.304500    3698 status.go:330] ha-475000 host status = "Running" (err=<nil>)
	I0913 16:54:03.304509    3698 host.go:66] Checking if "ha-475000" exists ...
	I0913 16:54:03.304612    3698 host.go:66] Checking if "ha-475000" exists ...
	I0913 16:54:03.304731    3698 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 16:54:03.304744    3698 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/id_rsa Username:docker}
	W0913 16:54:03.304925    3698 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0913 16:54:03.304939    3698 retry.go:31] will retry after 133.672853ms: dial tcp 192.168.105.5:22: connect: host is down
	W0913 16:54:03.440737    3698 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0913 16:54:03.440754    3698 retry.go:31] will retry after 477.451037ms: dial tcp 192.168.105.5:22: connect: host is down
	W0913 16:54:03.919052    3698 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0913 16:54:03.919082    3698 retry.go:31] will retry after 362.175345ms: dial tcp 192.168.105.5:22: connect: host is down
	W0913 16:54:04.282766    3698 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0913 16:54:04.282787    3698 retry.go:31] will retry after 992.652477ms: dial tcp 192.168.105.5:22: connect: host is down
	W0913 16:54:31.195823    3698 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0913 16:54:31.195914    3698 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0913 16:54:31.195926    3698 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0913 16:54:31.195931    3698 status.go:257] ha-475000 status: &{Name:ha-475000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0913 16:54:31.195942    3698 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0913 16:54:31.195946    3698 status.go:255] checking status of ha-475000-m02 ...
	I0913 16:54:31.196193    3698 status.go:330] ha-475000-m02 host status = "Stopped" (err=<nil>)
	I0913 16:54:31.196198    3698 status.go:343] host is not running, skipping remaining checks
	I0913 16:54:31.196201    3698 status.go:257] ha-475000-m02 status: &{Name:ha-475000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 16:54:31.196206    3698 status.go:255] checking status of ha-475000-m03 ...
	I0913 16:54:31.196862    3698 status.go:330] ha-475000-m03 host status = "Running" (err=<nil>)
	I0913 16:54:31.196867    3698 host.go:66] Checking if "ha-475000-m03" exists ...
	I0913 16:54:31.196964    3698 host.go:66] Checking if "ha-475000-m03" exists ...
	I0913 16:54:31.197083    3698 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 16:54:31.197088    3698 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m03/id_rsa Username:docker}
	W0913 16:55:46.196748    3698 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0913 16:55:46.196791    3698 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0913 16:55:46.196815    3698 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0913 16:55:46.196819    3698 status.go:257] ha-475000-m03 status: &{Name:ha-475000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0913 16:55:46.196827    3698 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0913 16:55:46.196831    3698 status.go:255] checking status of ha-475000-m04 ...
	I0913 16:55:46.197510    3698 status.go:330] ha-475000-m04 host status = "Running" (err=<nil>)
	I0913 16:55:46.197517    3698 host.go:66] Checking if "ha-475000-m04" exists ...
	I0913 16:55:46.197632    3698 host.go:66] Checking if "ha-475000-m04" exists ...
	I0913 16:55:46.197758    3698 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 16:55:46.197763    3698 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000-m04/id_rsa Username:docker}
	W0913 16:57:01.198670    3698 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0913 16:57:01.198876    3698 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0913 16:57:01.198921    3698 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0913 16:57:01.198946    3698 status.go:257] ha-475000-m04 status: &{Name:ha-475000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0913 16:57:01.198995    3698 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
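This status run takes almost three minutes because each unreachable node gets an SSH dial with short jittered retries (133ms, 477ms, 362ms, 992ms above) followed by a long "operation timed out", and a node whose dial never succeeds is reported as host: Error with kubelet and apiserver Nonexistent rather than aborting the whole command. A sketch of that per-node probe; the addresses come from the log, and the fixed delay list stands in for minikube's retry backoff:

	// sshprobe.go - hypothetical reduction of the per-node status dial.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func hostStatus(addr string) string {
		delays := []time.Duration{
			133 * time.Millisecond,
			477 * time.Millisecond,
			362 * time.Millisecond,
			992 * time.Millisecond,
		}
		for _, d := range delays {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				conn.Close()
				return "Running" // the real probe would go on to open an SSH session
			}
			time.Sleep(d)
		}
		return "Error" // reported with kubelet/apiserver: Nonexistent
	}

	func main() {
		for _, addr := range []string{"192.168.105.5:22", "192.168.105.7:22", "192.168.105.8:22"} {
			fmt.Println(addr, "->", hostStatus(addr))
		}
	}
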
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000: exit status 3 (25.992983792s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0913 16:57:27.193211    3768 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0913 16:57:27.193293    3768 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-475000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (209.02s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.38s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-475000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-475000 -v=7 --alsologtostderr
E0913 16:59:18.652821    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
E0913 17:00:41.742732    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-475000 -v=7 --alsologtostderr: (3m49.018191125s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-475000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-475000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.223964125s)

-- stdout --
	* [ha-475000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-475000" primary control-plane node in "ha-475000" cluster
	* Restarting existing qemu2 VM for "ha-475000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-475000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:02:35.751549    4225 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:02:35.751799    4225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:02:35.751803    4225 out.go:358] Setting ErrFile to fd 2...
	I0913 17:02:35.751806    4225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:02:35.751971    4225 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:02:35.753368    4225 out.go:352] Setting JSON to false
	I0913 17:02:35.772996    4225 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3719,"bootTime":1726268436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:02:35.773065    4225 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:02:35.777840    4225 out.go:177] * [ha-475000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:02:35.784727    4225 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:02:35.784792    4225 notify.go:220] Checking for updates...
	I0913 17:02:35.791697    4225 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:02:35.794708    4225 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:02:35.797701    4225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:02:35.800693    4225 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:02:35.803726    4225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:02:35.807011    4225 config.go:182] Loaded profile config "ha-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:02:35.807075    4225 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:02:35.811674    4225 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 17:02:35.818574    4225 start.go:297] selected driver: qemu2
	I0913 17:02:35.818581    4225 start.go:901] validating driver "qemu2" against &{Name:ha-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-475000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:02:35.818664    4225 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:02:35.821447    4225 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:02:35.821488    4225 cni.go:84] Creating CNI manager for ""
	I0913 17:02:35.821521    4225 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0913 17:02:35.821573    4225 start.go:340] cluster config:
	{Name:ha-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:02:35.825700    4225 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:02:35.831967    4225 out.go:177] * Starting "ha-475000" primary control-plane node in "ha-475000" cluster
	I0913 17:02:35.835718    4225 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:02:35.835736    4225 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:02:35.835747    4225 cache.go:56] Caching tarball of preloaded images
	I0913 17:02:35.835817    4225 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:02:35.835828    4225 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:02:35.835894    4225 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/ha-475000/config.json ...
	I0913 17:02:35.836332    4225 start.go:360] acquireMachinesLock for ha-475000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:02:35.836368    4225 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "ha-475000"
	I0913 17:02:35.836377    4225 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:02:35.836382    4225 fix.go:54] fixHost starting: 
	I0913 17:02:35.836504    4225 fix.go:112] recreateIfNeeded on ha-475000: state=Stopped err=<nil>
	W0913 17:02:35.836512    4225 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:02:35.840546    4225 out.go:177] * Restarting existing qemu2 VM for "ha-475000" ...
	I0913 17:02:35.848623    4225 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:02:35.848655    4225 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:b6:c9:56:c0:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/disk.qcow2
	I0913 17:02:35.850859    4225 main.go:141] libmachine: STDOUT: 
	I0913 17:02:35.850882    4225 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:02:35.850916    4225 fix.go:56] duration metric: took 14.536083ms for fixHost
	I0913 17:02:35.850921    4225 start.go:83] releasing machines lock for "ha-475000", held for 14.551917ms
	W0913 17:02:35.850927    4225 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:02:35.850968    4225 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:02:35.850973    4225 start.go:729] Will try again in 5 seconds ...
	I0913 17:02:40.852175    4225 start.go:360] acquireMachinesLock for ha-475000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:02:40.852576    4225 start.go:364] duration metric: took 294.334µs to acquireMachinesLock for "ha-475000"
	I0913 17:02:40.852689    4225 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:02:40.852708    4225 fix.go:54] fixHost starting: 
	I0913 17:02:40.853377    4225 fix.go:112] recreateIfNeeded on ha-475000: state=Stopped err=<nil>
	W0913 17:02:40.853402    4225 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:02:40.857986    4225 out.go:177] * Restarting existing qemu2 VM for "ha-475000" ...
	I0913 17:02:40.865793    4225 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:02:40.866085    4225 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:b6:c9:56:c0:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/disk.qcow2
	I0913 17:02:40.875589    4225 main.go:141] libmachine: STDOUT: 
	I0913 17:02:40.875647    4225 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:02:40.875737    4225 fix.go:56] duration metric: took 23.028833ms for fixHost
	I0913 17:02:40.875759    4225 start.go:83] releasing machines lock for "ha-475000", held for 23.164791ms
	W0913 17:02:40.875922    4225 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-475000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-475000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:02:40.883682    4225 out.go:201] 
	W0913 17:02:40.887785    4225 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:02:40.887836    4225 out.go:270] * 
	* 
	W0913 17:02:40.890368    4225 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:02:40.896788    4225 out.go:201] 

** /stderr **
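One detail in the stderr worth flagging: with no CNI set in the profile, cni.go counts the nodes and logs "multinode detected (4 nodes found), recommending kindnet". A minimal sketch of that decision, assuming only what those two log lines state:

	// cnipick.go - hypothetical reduction of the CNI choice logged above.
	package main

	import "fmt"

	func chooseCNI(configured string, nodeCount int) string {
		if configured != "" {
			return configured // an explicit CNI always wins
		}
		if nodeCount > 1 {
			fmt.Printf("multinode detected (%d nodes found), recommending kindnet\n", nodeCount)
			return "kindnet"
		}
		return "" // single node: leave the default in place
	}

	func main() {
		fmt.Println("CNI:", chooseCNI("", 4))
	}
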
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-475000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-475000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000: exit status 7 (33.319959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-475000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (234.38s)

TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-475000 node delete m03 -v=7 --alsologtostderr: exit status 83 (41.048459ms)

-- stdout --
	* The control-plane node ha-475000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-475000"

-- /stdout --
** stderr ** 
	I0913 17:02:41.045228    4237 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:02:41.045494    4237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:02:41.045497    4237 out.go:358] Setting ErrFile to fd 2...
	I0913 17:02:41.045500    4237 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:02:41.045617    4237 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:02:41.045810    4237 mustload.go:65] Loading cluster: ha-475000
	I0913 17:02:41.046043    4237 config.go:182] Loaded profile config "ha-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0913 17:02:41.046408    4237 out.go:270] ! The control-plane node ha-475000 host is not running (will try others): state=Stopped
	! The control-plane node ha-475000 host is not running (will try others): state=Stopped
	W0913 17:02:41.046522    4237 out.go:270] ! The control-plane node ha-475000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-475000-m02 host is not running (will try others): state=Stopped
	I0913 17:02:41.050566    4237 out.go:177] * The control-plane node ha-475000-m03 host is not running: state=Stopped
	I0913 17:02:41.053609    4237 out.go:177]   To start a cluster, run: "minikube start -p ha-475000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-475000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr: exit status 7 (31.088792ms)

-- stdout --
	ha-475000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-475000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-475000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-475000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0913 17:02:41.086534    4239 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:02:41.086692    4239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:02:41.086695    4239 out.go:358] Setting ErrFile to fd 2...
	I0913 17:02:41.086698    4239 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:02:41.086827    4239 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:02:41.086948    4239 out.go:352] Setting JSON to false
	I0913 17:02:41.086957    4239 mustload.go:65] Loading cluster: ha-475000
	I0913 17:02:41.087016    4239 notify.go:220] Checking for updates...
	I0913 17:02:41.087189    4239 config.go:182] Loaded profile config "ha-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:02:41.087199    4239 status.go:255] checking status of ha-475000 ...
	I0913 17:02:41.087438    4239 status.go:330] ha-475000 host status = "Stopped" (err=<nil>)
	I0913 17:02:41.087441    4239 status.go:343] host is not running, skipping remaining checks
	I0913 17:02:41.087443    4239 status.go:257] ha-475000 status: &{Name:ha-475000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 17:02:41.087453    4239 status.go:255] checking status of ha-475000-m02 ...
	I0913 17:02:41.087539    4239 status.go:330] ha-475000-m02 host status = "Stopped" (err=<nil>)
	I0913 17:02:41.087542    4239 status.go:343] host is not running, skipping remaining checks
	I0913 17:02:41.087543    4239 status.go:257] ha-475000-m02 status: &{Name:ha-475000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 17:02:41.087548    4239 status.go:255] checking status of ha-475000-m03 ...
	I0913 17:02:41.087631    4239 status.go:330] ha-475000-m03 host status = "Stopped" (err=<nil>)
	I0913 17:02:41.087634    4239 status.go:343] host is not running, skipping remaining checks
	I0913 17:02:41.087636    4239 status.go:257] ha-475000-m03 status: &{Name:ha-475000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 17:02:41.087640    4239 status.go:255] checking status of ha-475000-m04 ...
	I0913 17:02:41.087733    4239 status.go:330] ha-475000-m04 host status = "Stopped" (err=<nil>)
	I0913 17:02:41.087736    4239 status.go:343] host is not running, skipping remaining checks
	I0913 17:02:41.087737    4239 status.go:257] ha-475000-m04 status: &{Name:ha-475000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000: exit status 7 (30.243667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-475000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-475000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-475000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-475000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-475000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000: exit status 7 (30.033208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-475000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
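The assertion above parses the output of `minikube profile list --output json` and compares the profile's Status field. As a minimal Go sketch (not the test's actual helper; it assumes only the field names visible in the captured payload), decoding that JSON to read a profile's status looks like this:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList mirrors only the fields of the captured payload that the
	// check needs; encoding/json ignores everything else.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		// Trimmed from the payload captured above.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-475000","Status":"Stopped"}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// The test wants "Degraded" here; this run reports "Stopped".
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}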

TestMultiControlPlane/serial/StopCluster (202.09s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 stop -v=7 --alsologtostderr
E0913 17:03:42.456512    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
E0913 17:04:18.634562    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
E0913 17:05:05.542668    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-475000 stop -v=7 --alsologtostderr: (3m21.987584041s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr: exit status 7 (65.204084ms)

-- stdout --
	ha-475000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-475000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-475000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-475000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0913 17:06:03.244614    4307 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:06:03.244802    4307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:06:03.244808    4307 out.go:358] Setting ErrFile to fd 2...
	I0913 17:06:03.244811    4307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:06:03.245000    4307 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:06:03.245156    4307 out.go:352] Setting JSON to false
	I0913 17:06:03.245166    4307 mustload.go:65] Loading cluster: ha-475000
	I0913 17:06:03.245200    4307 notify.go:220] Checking for updates...
	I0913 17:06:03.245475    4307 config.go:182] Loaded profile config "ha-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:06:03.245483    4307 status.go:255] checking status of ha-475000 ...
	I0913 17:06:03.245797    4307 status.go:330] ha-475000 host status = "Stopped" (err=<nil>)
	I0913 17:06:03.245801    4307 status.go:343] host is not running, skipping remaining checks
	I0913 17:06:03.245804    4307 status.go:257] ha-475000 status: &{Name:ha-475000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 17:06:03.245816    4307 status.go:255] checking status of ha-475000-m02 ...
	I0913 17:06:03.245950    4307 status.go:330] ha-475000-m02 host status = "Stopped" (err=<nil>)
	I0913 17:06:03.245956    4307 status.go:343] host is not running, skipping remaining checks
	I0913 17:06:03.245958    4307 status.go:257] ha-475000-m02 status: &{Name:ha-475000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 17:06:03.245964    4307 status.go:255] checking status of ha-475000-m03 ...
	I0913 17:06:03.246089    4307 status.go:330] ha-475000-m03 host status = "Stopped" (err=<nil>)
	I0913 17:06:03.246094    4307 status.go:343] host is not running, skipping remaining checks
	I0913 17:06:03.246097    4307 status.go:257] ha-475000-m03 status: &{Name:ha-475000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 17:06:03.246102    4307 status.go:255] checking status of ha-475000-m04 ...
	I0913 17:06:03.246230    4307 status.go:330] ha-475000-m04 host status = "Stopped" (err=<nil>)
	I0913 17:06:03.246234    4307 status.go:343] host is not running, skipping remaining checks
	I0913 17:06:03.246237    4307 status.go:257] ha-475000-m04 status: &{Name:ha-475000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr": ha-475000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-475000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-475000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-475000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr": ha-475000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-475000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-475000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-475000-m04
type: Worker
host: Stopped
kubelet: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr": ha-475000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-475000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-475000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-475000-m04
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000: exit status 7 (32.355ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-475000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (202.09s)
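The three assertions above (ha_test.go:543, :549, :552) inspect the plain-text `status` output for expected node counts. A minimal sketch of that kind of check, assuming nothing beyond the output format captured above (this is not ha_test.go's actual code):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Abbreviated from the `minikube status` output captured above.
		status := `ha-475000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped

	ha-475000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	`
		// Counting field occurrences is enough to express checks like
		// "three kubelets are stopped" against this line-oriented format.
		fmt.Println("control-plane nodes:", strings.Count(status, "type: Control Plane"))
		fmt.Println("stopped kubelets:   ", strings.Count(status, "kubelet: Stopped"))
		fmt.Println("stopped apiservers: ", strings.Count(status, "apiserver: Stopped"))
	}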

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-475000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-475000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.185473208s)

-- stdout --
	* [ha-475000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-475000" primary control-plane node in "ha-475000" cluster
	* Restarting existing qemu2 VM for "ha-475000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-475000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:06:03.307885    4311 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:06:03.308032    4311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:06:03.308035    4311 out.go:358] Setting ErrFile to fd 2...
	I0913 17:06:03.308037    4311 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:06:03.308170    4311 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:06:03.309203    4311 out.go:352] Setting JSON to false
	I0913 17:06:03.325542    4311 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3927,"bootTime":1726268436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:06:03.325641    4311 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:06:03.330486    4311 out.go:177] * [ha-475000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:06:03.337256    4311 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:06:03.337286    4311 notify.go:220] Checking for updates...
	I0913 17:06:03.345422    4311 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:06:03.349404    4311 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:06:03.352443    4311 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:06:03.355479    4311 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:06:03.358501    4311 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:06:03.361670    4311 config.go:182] Loaded profile config "ha-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:06:03.361915    4311 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:06:03.366473    4311 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 17:06:03.373405    4311 start.go:297] selected driver: qemu2
	I0913 17:06:03.373411    4311 start.go:901] validating driver "qemu2" against &{Name:ha-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-475000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:06:03.373522    4311 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:06:03.375844    4311 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:06:03.375870    4311 cni.go:84] Creating CNI manager for ""
	I0913 17:06:03.375894    4311 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0913 17:06:03.375955    4311 start.go:340] cluster config:
	{Name:ha-475000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-475000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:06:03.379462    4311 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:06:03.388429    4311 out.go:177] * Starting "ha-475000" primary control-plane node in "ha-475000" cluster
	I0913 17:06:03.392444    4311 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:06:03.392456    4311 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:06:03.392463    4311 cache.go:56] Caching tarball of preloaded images
	I0913 17:06:03.392513    4311 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:06:03.392518    4311 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:06:03.392576    4311 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/ha-475000/config.json ...
	I0913 17:06:03.393029    4311 start.go:360] acquireMachinesLock for ha-475000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:06:03.393062    4311 start.go:364] duration metric: took 26.708µs to acquireMachinesLock for "ha-475000"
	I0913 17:06:03.393073    4311 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:06:03.393079    4311 fix.go:54] fixHost starting: 
	I0913 17:06:03.393190    4311 fix.go:112] recreateIfNeeded on ha-475000: state=Stopped err=<nil>
	W0913 17:06:03.393198    4311 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:06:03.397383    4311 out.go:177] * Restarting existing qemu2 VM for "ha-475000" ...
	I0913 17:06:03.404346    4311 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:06:03.404393    4311 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:b6:c9:56:c0:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/disk.qcow2
	I0913 17:06:03.406423    4311 main.go:141] libmachine: STDOUT: 
	I0913 17:06:03.406442    4311 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:06:03.406471    4311 fix.go:56] duration metric: took 13.392792ms for fixHost
	I0913 17:06:03.406474    4311 start.go:83] releasing machines lock for "ha-475000", held for 13.408542ms
	W0913 17:06:03.406479    4311 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:06:03.406511    4311 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:06:03.406516    4311 start.go:729] Will try again in 5 seconds ...
	I0913 17:06:08.407580    4311 start.go:360] acquireMachinesLock for ha-475000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:06:08.407932    4311 start.go:364] duration metric: took 283.792µs to acquireMachinesLock for "ha-475000"
	I0913 17:06:08.408103    4311 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:06:08.408122    4311 fix.go:54] fixHost starting: 
	I0913 17:06:08.408826    4311 fix.go:112] recreateIfNeeded on ha-475000: state=Stopped err=<nil>
	W0913 17:06:08.408854    4311 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:06:08.417110    4311 out.go:177] * Restarting existing qemu2 VM for "ha-475000" ...
	I0913 17:06:08.420291    4311 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:06:08.420457    4311 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:b6:c9:56:c0:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/ha-475000/disk.qcow2
	I0913 17:06:08.430178    4311 main.go:141] libmachine: STDOUT: 
	I0913 17:06:08.430276    4311 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:06:08.430370    4311 fix.go:56] duration metric: took 22.246375ms for fixHost
	I0913 17:06:08.430389    4311 start.go:83] releasing machines lock for "ha-475000", held for 22.434666ms
	W0913 17:06:08.430611    4311 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-475000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-475000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:06:08.439221    4311 out.go:201] 
	W0913 17:06:08.443284    4311 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:06:08.443310    4311 out.go:270] * 
	* 
	W0913 17:06:08.445624    4311 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:06:08.453283    4311 out.go:201] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-475000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000: exit status 7 (70.131375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-475000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
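Every start/restart failure in this report bottoms out in the same driver error: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach /var/run/socket_vmnet. A minimal standalone Go probe for that socket (a hypothetical diagnostic, not part of minikube; the socket path is taken from the logs above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the socket_vmnet control socket the qemu2 driver depends on.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this host the expected result is "connection refused",
			// matching the GUEST_PROVISION failures above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}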

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-475000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-475000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-475000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-475000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000: exit status 7 (30.6155ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-475000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-475000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-475000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.897458ms)

-- stdout --
	* The control-plane node ha-475000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-475000"

-- /stdout --
** stderr ** 
	I0913 17:06:08.647915    4326 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:06:08.648463    4326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:06:08.648467    4326 out.go:358] Setting ErrFile to fd 2...
	I0913 17:06:08.648470    4326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:06:08.648613    4326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:06:08.648830    4326 mustload.go:65] Loading cluster: ha-475000
	I0913 17:06:08.649074    4326 config.go:182] Loaded profile config "ha-475000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W0913 17:06:08.649375    4326 out.go:270] ! The control-plane node ha-475000 host is not running (will try others): state=Stopped
	! The control-plane node ha-475000 host is not running (will try others): state=Stopped
	W0913 17:06:08.649476    4326 out.go:270] ! The control-plane node ha-475000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-475000-m02 host is not running (will try others): state=Stopped
	I0913 17:06:08.653322    4326 out.go:177] * The control-plane node ha-475000-m03 host is not running: state=Stopped
	I0913 17:06:08.657190    4326 out.go:177]   To start a cluster, run: "minikube start -p ha-475000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-475000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-475000 -n ha-475000: exit status 7 (30.224875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-475000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (10.18s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-018000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-018000 --driver=qemu2 : exit status 80 (10.108133375s)

-- stdout --
	* [image-018000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-018000" primary control-plane node in "image-018000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-018000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-018000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-018000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-018000 -n image-018000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-018000 -n image-018000: exit status 7 (70.395292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-018000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.18s)

TestJSONOutput/start/Command (9.79s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-014000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-014000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.78435125s)

-- stdout --
	{"specversion":"1.0","id":"f6be9418-0493-4a42-b028-49ec8661a2f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-014000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ed34226e-b1df-45ef-b690-12a6eb36dd9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19640"}}
	{"specversion":"1.0","id":"6cc4914a-9b91-438f-b3de-29b4d04496e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig"}}
	{"specversion":"1.0","id":"3ee937c0-bba6-4c60-aad8-eeea449a75e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"1afdda19-48b5-4c8c-b0e6-2cb470e9d1a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"20865a19-0918-43f4-a4bd-ceed27b1861f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube"}}
	{"specversion":"1.0","id":"09b737f7-8bf9-465a-9715-85a740a04f3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"09ece5f2-1719-407e-983b-9c67376c8300","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d05036cd-2e15-4309-9905-8b806f6b5f39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"dbe46050-26cb-483f-aab2-28e3e0bcce52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-014000\" primary control-plane node in \"json-output-014000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2764c67f-11c3-40fe-88b4-867ae5a0d534","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"722f8d74-db4f-43cc-aa7c-81f84ef0ba29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-014000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"d02d8573-2f47-41b4-82df-9b901ea823bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"bc062fc4-c43c-4b8b-b1fe-bca538c4c467","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"aa8c83cb-61b9-4d3b-8089-534f16dd35c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-014000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"a10f2ffd-91aa-4086-976e-2f7444ad6c86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"3145656f-2dce-4462-9930-3a013b0bc944","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-014000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.79s)
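
Note: this marshal failure is mechanical rather than a defect in the test itself. The harness reads minikube's --output=json stream line by line and decodes each line as a CloudEvent, so the bare "OUTPUT: " line that socket_vmnet_client injects into stdout stops decoding at the first byte ('O' cannot begin a JSON value). The unpause failure further down is the same class of error, tripping on a plain-text line that starts with '*'. A minimal Go sketch of that decode step, with an illustrative struct and input rather than the harness's actual code:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"strings"
	)

	// cloudEvent models only the fields visible in this report; the real
	// harness uses the CloudEvents SDK, so treat this shape as an assumption.
	type cloudEvent struct {
		SpecVersion string          `json:"specversion"`
		Type        string          `json:"type"`
		Data        json.RawMessage `json:"data"`
	}

	func main() {
		// One valid event line, then the raw "OUTPUT: " line seen above.
		stream := "{\"specversion\":\"1.0\",\"type\":\"io.k8s.sigs.minikube.info\",\"data\":{}}\nOUTPUT: "
		sc := bufio.NewScanner(strings.NewReader(stream))
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				// Prints: invalid character 'O' looking for beginning of value
				fmt.Println("converting to cloud events:", err)
				return
			}
			fmt.Println("event:", ev.Type)
		}
	}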

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-014000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-014000 --output=json --user=testUser: exit status 83 (78.461ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d77e15f1-1dda-41ee-a67f-3ce62dfbe1d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-014000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"aba83799-6633-4b6b-81d2-6312adcc4c50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-014000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-014000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-014000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-014000 --output=json --user=testUser: exit status 83 (46.329542ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-014000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-014000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-014000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.34s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-677000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-677000 --driver=qemu2 : exit status 80 (10.049254291s)

                                                
                                                
-- stdout --
	* [first-677000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-677000" primary control-plane node in "first-677000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-677000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-677000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-677000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-13 17:06:41.863386 -0700 PDT m=+2469.362754126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-678000 -n second-678000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-678000 -n second-678000: exit status 85 (70.658709ms)

                                                
                                                
-- stdout --
	* Profile "second-678000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-678000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-678000" host is not running, skipping log retrieval (state="* Profile \"second-678000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-678000\"")
helpers_test.go:175: Cleaning up "second-678000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-678000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-09-13 17:06:42.041053 -0700 PDT m=+2469.540423543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-677000 -n first-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-677000 -n first-677000: exit status 7 (30.389541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-677000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-677000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-677000
--- FAIL: TestMinikubeProfile (10.34s)
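
Note: every start failure in this report shares the same root cause: connections to /var/run/socket_vmnet are refused, meaning no socket_vmnet daemon is accepting on that path (typically the daemon is not running, or a stale socket file was left behind). A small stand-alone Go probe for triaging this on the build host; it is a diagnostic sketch, not part of the test suite:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failures above
		if _, err := os.Stat(sock); err != nil {
			fmt.Println("socket file missing:", err) // daemon never created it
			return
		}
		// Dialing distinguishes a live daemon from a stale socket file;
		// ECONNREFUSED here corresponds to the "Connection refused" in the logs.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Println("daemon not accepting:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}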

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.06s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-178000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-178000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.986816542s)

                                                
                                                
-- stdout --
	* [mount-start-1-178000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-178000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-178000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-178000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-178000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-178000 -n mount-start-1-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-178000 -n mount-start-1-178000: exit status 7 (67.391417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.06s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (10.01s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-984000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-984000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.9361945s)

                                                
                                                
-- stdout --
	* [multinode-984000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-984000" primary control-plane node in "multinode-984000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-984000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 17:06:52.424471    4466 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:06:52.424607    4466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:06:52.424611    4466 out.go:358] Setting ErrFile to fd 2...
	I0913 17:06:52.424613    4466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:06:52.424744    4466 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:06:52.425801    4466 out.go:352] Setting JSON to false
	I0913 17:06:52.442088    4466 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3976,"bootTime":1726268436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:06:52.442165    4466 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:06:52.448227    4466 out.go:177] * [multinode-984000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:06:52.456991    4466 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:06:52.457029    4466 notify.go:220] Checking for updates...
	I0913 17:06:52.463922    4466 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:06:52.467007    4466 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:06:52.470023    4466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:06:52.471610    4466 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:06:52.475025    4466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:06:52.478150    4466 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:06:52.481866    4466 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:06:52.488993    4466 start.go:297] selected driver: qemu2
	I0913 17:06:52.488998    4466 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:06:52.489004    4466 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:06:52.491377    4466 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:06:52.495859    4466 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:06:52.499055    4466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:06:52.499071    4466 cni.go:84] Creating CNI manager for ""
	I0913 17:06:52.499089    4466 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0913 17:06:52.499094    4466 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0913 17:06:52.499122    4466 start.go:340] cluster config:
	{Name:multinode-984000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:06:52.502866    4466 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:06:52.510917    4466 out.go:177] * Starting "multinode-984000" primary control-plane node in "multinode-984000" cluster
	I0913 17:06:52.515064    4466 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:06:52.515081    4466 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:06:52.515095    4466 cache.go:56] Caching tarball of preloaded images
	I0913 17:06:52.515175    4466 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:06:52.515181    4466 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:06:52.515392    4466 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/multinode-984000/config.json ...
	I0913 17:06:52.515406    4466 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/multinode-984000/config.json: {Name:mkbff991ad44521a8e2d7504fcf1f28c3932ae4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:06:52.515633    4466 start.go:360] acquireMachinesLock for multinode-984000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:06:52.515667    4466 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "multinode-984000"
	I0913 17:06:52.515678    4466 start.go:93] Provisioning new machine with config: &{Name:multinode-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:06:52.515709    4466 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:06:52.524002    4466 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 17:06:52.541609    4466 start.go:159] libmachine.API.Create for "multinode-984000" (driver="qemu2")
	I0913 17:06:52.541636    4466 client.go:168] LocalClient.Create starting
	I0913 17:06:52.541694    4466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:06:52.541722    4466 main.go:141] libmachine: Decoding PEM data...
	I0913 17:06:52.541731    4466 main.go:141] libmachine: Parsing certificate...
	I0913 17:06:52.541768    4466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:06:52.541790    4466 main.go:141] libmachine: Decoding PEM data...
	I0913 17:06:52.541800    4466 main.go:141] libmachine: Parsing certificate...
	I0913 17:06:52.542117    4466 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:06:52.702461    4466 main.go:141] libmachine: Creating SSH key...
	I0913 17:06:52.819433    4466 main.go:141] libmachine: Creating Disk image...
	I0913 17:06:52.819438    4466 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:06:52.819601    4466 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/disk.qcow2
	I0913 17:06:52.828944    4466 main.go:141] libmachine: STDOUT: 
	I0913 17:06:52.828958    4466 main.go:141] libmachine: STDERR: 
	I0913 17:06:52.829025    4466 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/disk.qcow2 +20000M
	I0913 17:06:52.836822    4466 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:06:52.836838    4466 main.go:141] libmachine: STDERR: 
	I0913 17:06:52.836852    4466 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/disk.qcow2
	I0913 17:06:52.836855    4466 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:06:52.836868    4466 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:06:52.836906    4466 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:33:13:6b:d7:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/disk.qcow2
	I0913 17:06:52.838526    4466 main.go:141] libmachine: STDOUT: 
	I0913 17:06:52.838539    4466 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:06:52.838560    4466 client.go:171] duration metric: took 296.920375ms to LocalClient.Create
	I0913 17:06:54.840708    4466 start.go:128] duration metric: took 2.325012041s to createHost
	I0913 17:06:54.840784    4466 start.go:83] releasing machines lock for "multinode-984000", held for 2.325142458s
	W0913 17:06:54.840834    4466 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:06:54.851937    4466 out.go:177] * Deleting "multinode-984000" in qemu2 ...
	W0913 17:06:54.897368    4466 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:06:54.897389    4466 start.go:729] Will try again in 5 seconds ...
	I0913 17:06:59.899523    4466 start.go:360] acquireMachinesLock for multinode-984000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:06:59.899962    4466 start.go:364] duration metric: took 347.708µs to acquireMachinesLock for "multinode-984000"
	I0913 17:06:59.900082    4466 start.go:93] Provisioning new machine with config: &{Name:multinode-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:06:59.900392    4466 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:06:59.919883    4466 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 17:06:59.970184    4466 start.go:159] libmachine.API.Create for "multinode-984000" (driver="qemu2")
	I0913 17:06:59.970231    4466 client.go:168] LocalClient.Create starting
	I0913 17:06:59.970341    4466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:06:59.970423    4466 main.go:141] libmachine: Decoding PEM data...
	I0913 17:06:59.970440    4466 main.go:141] libmachine: Parsing certificate...
	I0913 17:06:59.970499    4466 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:06:59.970543    4466 main.go:141] libmachine: Decoding PEM data...
	I0913 17:06:59.970567    4466 main.go:141] libmachine: Parsing certificate...
	I0913 17:06:59.971183    4466 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:07:00.139891    4466 main.go:141] libmachine: Creating SSH key...
	I0913 17:07:00.259326    4466 main.go:141] libmachine: Creating Disk image...
	I0913 17:07:00.259332    4466 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:07:00.259495    4466 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/disk.qcow2
	I0913 17:07:00.268789    4466 main.go:141] libmachine: STDOUT: 
	I0913 17:07:00.268807    4466 main.go:141] libmachine: STDERR: 
	I0913 17:07:00.268877    4466 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/disk.qcow2 +20000M
	I0913 17:07:00.276805    4466 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:07:00.276818    4466 main.go:141] libmachine: STDERR: 
	I0913 17:07:00.276835    4466 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/disk.qcow2
	I0913 17:07:00.276840    4466 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:07:00.276852    4466 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:07:00.276883    4466 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:49:4d:32:90:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/disk.qcow2
	I0913 17:07:00.278519    4466 main.go:141] libmachine: STDOUT: 
	I0913 17:07:00.278535    4466 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:07:00.278548    4466 client.go:171] duration metric: took 308.317167ms to LocalClient.Create
	I0913 17:07:02.280678    4466 start.go:128] duration metric: took 2.380288458s to createHost
	I0913 17:07:02.280791    4466 start.go:83] releasing machines lock for "multinode-984000", held for 2.38081025s
	W0913 17:07:02.281143    4466 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-984000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-984000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:07:02.296829    4466 out.go:201] 
	W0913 17:07:02.300907    4466 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:07:02.300941    4466 out.go:270] * 
	* 
	W0913 17:07:02.303560    4466 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:07:02.319690    4466 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-984000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000: exit status 7 (70.133167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.01s)
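
Note: the --alsologtostderr trace above exposes the retry shape behind these roughly 10-second failures: libmachine's create fails as soon as socket_vmnet_client cannot connect, the half-created profile is deleted, and after the logged 5-second pause one more create is attempted before minikube exits 80 with GUEST_PROVISION. The control flow, sketched in Go with hypothetical stub helpers (this mirrors the observed behavior, not minikube's source):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the libmachine create path; this stub always
	// fails the way this run does, purely for illustration.
	func createHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func deleteHost(profile string) {
		fmt.Printf("* Deleting %q in qemu2 ...\n", profile)
	}

	func startWithRetry(profile string) error {
		err := createHost(profile)
		if err == nil {
			return nil
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		deleteHost(profile)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds" in the trace
		if err := createHost(profile); err != nil {
			return fmt.Errorf("GUEST_PROVISION: %w", err) // second failure is fatal (exit status 80)
		}
		return nil
	}

	func main() {
		if err := startWithRetry("multinode-984000"); err != nil {
			fmt.Println("X Exiting due to", err)
		}
	}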

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (78.97s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (131.387042ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-984000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- rollout status deployment/busybox: exit status 1 (59.204833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-984000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.4335ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-984000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.757625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-984000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.420542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-984000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.030167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-984000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.460209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-984000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.018792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-984000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.357208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-984000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.862292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-984000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.722917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-984000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.194875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-984000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.34125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-984000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.093583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-984000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.771042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-984000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.245708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-984000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.027083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-984000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000: exit status 7 (30.321791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (78.97s)
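
Note: nearly all of this test's 78.97s is spent in the pod-IP poll: the harness keeps re-running the jsonpath query (11 attempts above) even though the cluster never came up, and every attempt fails with "no server found". An approximation of that poll loop using os/exec; the attempt count matches the log, while the sleep interval is an assumption:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podIPs shells out the same way the log shows; errors surface as the
	// non-zero exits recorded above.
	func podIPs(profile string) (string, error) {
		out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		const profile = "multinode-984000"
		for attempt := 1; attempt <= 11; attempt++ {
			ips, err := podIPs(profile)
			if err == nil && ips != "" {
				fmt.Println("pod IPs:", ips)
				return
			}
			fmt.Printf("attempt %d: failed to retrieve Pod IPs (may be temporary): %v\n", attempt, err)
			time.Sleep(5 * time.Second) // assumed backoff; the harness's schedule is not shown
		}
		fmt.Println("failed to resolve pod IPs: giving up")
	}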

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-984000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.299291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-984000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000: exit status 7 (30.682458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-984000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-984000 -v 3 --alsologtostderr: exit status 83 (42.46225ms)

-- stdout --
	* The control-plane node multinode-984000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-984000"

-- /stdout --
** stderr ** 
	I0913 17:08:21.493261    4556 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:08:21.493421    4556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:21.493425    4556 out.go:358] Setting ErrFile to fd 2...
	I0913 17:08:21.493432    4556 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:21.493560    4556 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:08:21.493793    4556 mustload.go:65] Loading cluster: multinode-984000
	I0913 17:08:21.494000    4556 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:08:21.499186    4556 out.go:177] * The control-plane node multinode-984000 host is not running: state=Stopped
	I0913 17:08:21.502179    4556 out.go:177]   To start a cluster, run: "minikube start -p multinode-984000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-984000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000: exit status 7 (30.18775ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-984000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-984000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.017875ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-984000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-984000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-984000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000: exit status 7 (30.2535ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
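
Two distinct failures stack up above: kubectl exits 1 because the kubeconfig context is gone, and the test then fails to JSON-decode the empty output ("unexpected end of JSON input"). The jsonpath template wraps each node's label map as "[{...},{...},]", so the decode step plausibly has to trim the trailing comma first; a minimal sketch under that assumption (not the repo's actual helper):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"strings"
    )

    // parseNodeLabels decodes output shaped like `[{"k":"v"},{"k":"v"},]`.
    // Empty input fails with "unexpected end of JSON input", matching the log.
    func parseNodeLabels(raw string) ([]map[string]string, error) {
    	raw = strings.Replace(raw, ",]", "]", 1) // assumed normalization of the trailing comma
    	var labels []map[string]string
    	err := json.Unmarshal([]byte(raw), &labels)
    	return labels, err
    }

    func main() {
    	got, err := parseNodeLabels(`[{"kubernetes.io/hostname":"multinode-984000"},]`)
    	fmt.Println(got, err)
    }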

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-984000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-984000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-984000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-984000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000: exit status 7 (30.4365ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
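
The assertion above counts Config.Nodes in the `profile list --output json` payload: a healthy three-node cluster would list one control plane plus two workers, while the stopped profile still records only its single original node. A minimal sketch of that count, decoding only the fields it needs (the struct is trimmed for illustration and is not the test's own type):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // profileList models just enough of the `minikube profile list --output json`
    // payload shown in the log to count nodes per profile.
    type profileList struct {
    	Valid []struct {
    		Name   string
    		Config struct {
    			Nodes []struct {
    				ControlPlane bool
    				Worker       bool
    			}
    		}
    	} `json:"valid"`
    }

    func main() {
    	out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	var pl profileList
    	if err := json.Unmarshal(out, &pl); err != nil {
    		fmt.Println(err)
    		return
    	}
    	for _, p := range pl.Valid {
    		// The failing run reports 1 node here where the test expects 3.
    		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
    	}
    }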

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 status --output json --alsologtostderr: exit status 7 (30.926583ms)

-- stdout --
	{"Name":"multinode-984000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0913 17:08:21.701189    4568 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:08:21.701353    4568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:21.701357    4568 out.go:358] Setting ErrFile to fd 2...
	I0913 17:08:21.701359    4568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:21.701488    4568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:08:21.701625    4568 out.go:352] Setting JSON to true
	I0913 17:08:21.701641    4568 mustload.go:65] Loading cluster: multinode-984000
	I0913 17:08:21.701684    4568 notify.go:220] Checking for updates...
	I0913 17:08:21.701874    4568 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:08:21.701880    4568 status.go:255] checking status of multinode-984000 ...
	I0913 17:08:21.702111    4568 status.go:330] multinode-984000 host status = "Stopped" (err=<nil>)
	I0913 17:08:21.702115    4568 status.go:343] host is not running, skipping remaining checks
	I0913 17:08:21.702117    4568 status.go:257] multinode-984000 status: &{Name:multinode-984000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-984000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000: exit status 7 (30.693042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
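
The decode error above is a shape mismatch rather than malformed JSON: with a single node, `status --output json` emits one object, while the test unmarshals into []cmd.Status. A tolerant decoder can accept either form; a minimal sketch, with Status trimmed to the fields visible in the log:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Status holds only the fields printed in the log's stdout above.
    type Status struct {
    	Name, Host, Kubelet, APIServer, Kubeconfig string
    	Worker                                     bool
    }

    // decodeStatuses accepts either a JSON array of statuses or the
    // single-object form that a one-node cluster produces.
    func decodeStatuses(raw []byte) ([]Status, error) {
    	var many []Status
    	if err := json.Unmarshal(raw, &many); err == nil {
    		return many, nil
    	}
    	var one Status
    	if err := json.Unmarshal(raw, &one); err != nil {
    		return nil, err
    	}
    	return []Status{one}, nil
    }

    func main() {
    	raw := []byte(`{"Name":"multinode-984000","Host":"Stopped","Kubelet":"Stopped",` +
    		`"APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
    	got, err := decodeStatuses(raw)
    	fmt.Println(got, err)
    }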

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 node stop m03: exit status 85 (48.584792ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-984000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 status: exit status 7 (30.336292ms)

-- stdout --
	multinode-984000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 status --alsologtostderr: exit status 7 (30.560917ms)

-- stdout --
	multinode-984000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 17:08:21.842319    4576 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:08:21.842482    4576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:21.842485    4576 out.go:358] Setting ErrFile to fd 2...
	I0913 17:08:21.842488    4576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:21.842622    4576 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:08:21.842741    4576 out.go:352] Setting JSON to false
	I0913 17:08:21.842749    4576 mustload.go:65] Loading cluster: multinode-984000
	I0913 17:08:21.842804    4576 notify.go:220] Checking for updates...
	I0913 17:08:21.842958    4576 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:08:21.842967    4576 status.go:255] checking status of multinode-984000 ...
	I0913 17:08:21.843207    4576 status.go:330] multinode-984000 host status = "Stopped" (err=<nil>)
	I0913 17:08:21.843211    4576 status.go:343] host is not running, skipping remaining checks
	I0913 17:08:21.843213    4576 status.go:257] multinode-984000 status: &{Name:multinode-984000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-984000 status --alsologtostderr": multinode-984000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000: exit status 7 (30.694084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (54s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 node start m03 -v=7 --alsologtostderr: exit status 85 (45.859417ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0913 17:08:21.903183    4580 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:08:21.903422    4580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:21.903426    4580 out.go:358] Setting ErrFile to fd 2...
	I0913 17:08:21.903428    4580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:21.903559    4580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:08:21.903813    4580 mustload.go:65] Loading cluster: multinode-984000
	I0913 17:08:21.903998    4580 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:08:21.908151    4580 out.go:201] 
	W0913 17:08:21.911211    4580 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0913 17:08:21.911217    4580 out.go:270] * 
	* 
	W0913 17:08:21.912901    4580 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:08:21.916068    4580 out.go:201] 

** /stderr **
multinode_test.go:284: I0913 17:08:21.903183    4580 out.go:345] Setting OutFile to fd 1 ...
I0913 17:08:21.903422    4580 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 17:08:21.903426    4580 out.go:358] Setting ErrFile to fd 2...
I0913 17:08:21.903428    4580 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 17:08:21.903559    4580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
I0913 17:08:21.903813    4580 mustload.go:65] Loading cluster: multinode-984000
I0913 17:08:21.903998    4580 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 17:08:21.908151    4580 out.go:201] 
W0913 17:08:21.911211    4580 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0913 17:08:21.911217    4580 out.go:270] * 
* 
W0913 17:08:21.912901    4580 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0913 17:08:21.916068    4580 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-984000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr: exit status 7 (30.858208ms)

-- stdout --
	multinode-984000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 17:08:21.950325    4582 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:08:21.950468    4582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:21.950472    4582 out.go:358] Setting ErrFile to fd 2...
	I0913 17:08:21.950475    4582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:21.950609    4582 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:08:21.950716    4582 out.go:352] Setting JSON to false
	I0913 17:08:21.950724    4582 mustload.go:65] Loading cluster: multinode-984000
	I0913 17:08:21.950780    4582 notify.go:220] Checking for updates...
	I0913 17:08:21.950923    4582 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:08:21.950929    4582 status.go:255] checking status of multinode-984000 ...
	I0913 17:08:21.951162    4582 status.go:330] multinode-984000 host status = "Stopped" (err=<nil>)
	I0913 17:08:21.951165    4582 status.go:343] host is not running, skipping remaining checks
	I0913 17:08:21.951167    4582 status.go:257] multinode-984000 status: &{Name:multinode-984000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr: exit status 7 (74.482375ms)

-- stdout --
	multinode-984000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 17:08:22.731622    4584 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:08:22.731823    4584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:22.731827    4584 out.go:358] Setting ErrFile to fd 2...
	I0913 17:08:22.731831    4584 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:22.731987    4584 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:08:22.732144    4584 out.go:352] Setting JSON to false
	I0913 17:08:22.732155    4584 mustload.go:65] Loading cluster: multinode-984000
	I0913 17:08:22.732192    4584 notify.go:220] Checking for updates...
	I0913 17:08:22.732432    4584 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:08:22.732442    4584 status.go:255] checking status of multinode-984000 ...
	I0913 17:08:22.732748    4584 status.go:330] multinode-984000 host status = "Stopped" (err=<nil>)
	I0913 17:08:22.732753    4584 status.go:343] host is not running, skipping remaining checks
	I0913 17:08:22.732756    4584 status.go:257] multinode-984000 status: &{Name:multinode-984000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr: exit status 7 (72.154708ms)

-- stdout --
	multinode-984000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 17:08:24.175988    4586 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:08:24.176186    4586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:24.176190    4586 out.go:358] Setting ErrFile to fd 2...
	I0913 17:08:24.176193    4586 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:24.176355    4586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:08:24.176507    4586 out.go:352] Setting JSON to false
	I0913 17:08:24.176517    4586 mustload.go:65] Loading cluster: multinode-984000
	I0913 17:08:24.176567    4586 notify.go:220] Checking for updates...
	I0913 17:08:24.176800    4586 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:08:24.176807    4586 status.go:255] checking status of multinode-984000 ...
	I0913 17:08:24.177134    4586 status.go:330] multinode-984000 host status = "Stopped" (err=<nil>)
	I0913 17:08:24.177139    4586 status.go:343] host is not running, skipping remaining checks
	I0913 17:08:24.177142    4586 status.go:257] multinode-984000 status: &{Name:multinode-984000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr: exit status 7 (74.999334ms)

-- stdout --
	multinode-984000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 17:08:26.330997    4588 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:08:26.331242    4588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:26.331246    4588 out.go:358] Setting ErrFile to fd 2...
	I0913 17:08:26.331250    4588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:26.331422    4588 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:08:26.331574    4588 out.go:352] Setting JSON to false
	I0913 17:08:26.331586    4588 mustload.go:65] Loading cluster: multinode-984000
	I0913 17:08:26.331641    4588 notify.go:220] Checking for updates...
	I0913 17:08:26.331851    4588 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:08:26.331859    4588 status.go:255] checking status of multinode-984000 ...
	I0913 17:08:26.332177    4588 status.go:330] multinode-984000 host status = "Stopped" (err=<nil>)
	I0913 17:08:26.332182    4588 status.go:343] host is not running, skipping remaining checks
	I0913 17:08:26.332185    4588 status.go:257] multinode-984000 status: &{Name:multinode-984000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr: exit status 7 (72.659708ms)

-- stdout --
	multinode-984000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 17:08:28.451570    4590 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:08:28.451799    4590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:28.451803    4590 out.go:358] Setting ErrFile to fd 2...
	I0913 17:08:28.451806    4590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:28.451975    4590 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:08:28.452146    4590 out.go:352] Setting JSON to false
	I0913 17:08:28.452157    4590 mustload.go:65] Loading cluster: multinode-984000
	I0913 17:08:28.452195    4590 notify.go:220] Checking for updates...
	I0913 17:08:28.452433    4590 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:08:28.452440    4590 status.go:255] checking status of multinode-984000 ...
	I0913 17:08:28.452755    4590 status.go:330] multinode-984000 host status = "Stopped" (err=<nil>)
	I0913 17:08:28.452760    4590 status.go:343] host is not running, skipping remaining checks
	I0913 17:08:28.452763    4590 status.go:257] multinode-984000 status: &{Name:multinode-984000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr: exit status 7 (71.416291ms)

-- stdout --
	multinode-984000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 17:08:31.533148    4597 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:08:31.533365    4597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:31.533370    4597 out.go:358] Setting ErrFile to fd 2...
	I0913 17:08:31.533373    4597 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:31.533531    4597 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:08:31.533703    4597 out.go:352] Setting JSON to false
	I0913 17:08:31.533717    4597 mustload.go:65] Loading cluster: multinode-984000
	I0913 17:08:31.533767    4597 notify.go:220] Checking for updates...
	I0913 17:08:31.533980    4597 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:08:31.533992    4597 status.go:255] checking status of multinode-984000 ...
	I0913 17:08:31.534306    4597 status.go:330] multinode-984000 host status = "Stopped" (err=<nil>)
	I0913 17:08:31.534311    4597 status.go:343] host is not running, skipping remaining checks
	I0913 17:08:31.534314    4597 status.go:257] multinode-984000 status: &{Name:multinode-984000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr: exit status 7 (75.178125ms)

-- stdout --
	multinode-984000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 17:08:41.624034    4599 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:08:41.624249    4599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:41.624256    4599 out.go:358] Setting ErrFile to fd 2...
	I0913 17:08:41.624258    4599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:41.624413    4599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:08:41.624557    4599 out.go:352] Setting JSON to false
	I0913 17:08:41.624568    4599 mustload.go:65] Loading cluster: multinode-984000
	I0913 17:08:41.624601    4599 notify.go:220] Checking for updates...
	I0913 17:08:41.624839    4599 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:08:41.624848    4599 status.go:255] checking status of multinode-984000 ...
	I0913 17:08:41.625172    4599 status.go:330] multinode-984000 host status = "Stopped" (err=<nil>)
	I0913 17:08:41.625177    4599 status.go:343] host is not running, skipping remaining checks
	I0913 17:08:41.625180    4599 status.go:257] multinode-984000 status: &{Name:multinode-984000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0913 17:08:42.451982    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr: exit status 7 (75.419042ms)

-- stdout --
	multinode-984000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 17:08:54.682650    4604 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:08:54.682857    4604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:54.682862    4604 out.go:358] Setting ErrFile to fd 2...
	I0913 17:08:54.682865    4604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:08:54.683028    4604 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:08:54.683175    4604 out.go:352] Setting JSON to false
	I0913 17:08:54.683185    4604 mustload.go:65] Loading cluster: multinode-984000
	I0913 17:08:54.683226    4604 notify.go:220] Checking for updates...
	I0913 17:08:54.683459    4604 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:08:54.683467    4604 status.go:255] checking status of multinode-984000 ...
	I0913 17:08:54.683785    4604 status.go:330] multinode-984000 host status = "Stopped" (err=<nil>)
	I0913 17:08:54.683790    4604 status.go:343] host is not running, skipping remaining checks
	I0913 17:08:54.683793    4604 status.go:257] multinode-984000 status: &{Name:multinode-984000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr: exit status 7 (71.514041ms)

-- stdout --
	multinode-984000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0913 17:09:15.839433    4614 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:09:15.839632    4614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:09:15.839636    4614 out.go:358] Setting ErrFile to fd 2...
	I0913 17:09:15.839640    4614 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:09:15.839786    4614 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:09:15.839951    4614 out.go:352] Setting JSON to false
	I0913 17:09:15.839969    4614 mustload.go:65] Loading cluster: multinode-984000
	I0913 17:09:15.840005    4614 notify.go:220] Checking for updates...
	I0913 17:09:15.840269    4614 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:09:15.840276    4614 status.go:255] checking status of multinode-984000 ...
	I0913 17:09:15.840605    4614 status.go:330] multinode-984000 host status = "Stopped" (err=<nil>)
	I0913 17:09:15.840610    4614 status.go:343] host is not running, skipping remaining checks
	I0913 17:09:15.840613    4614 status.go:257] multinode-984000 status: &{Name:multinode-984000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-984000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000: exit status 7 (33.148125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (54.00s)
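
The widening gaps between the status retries above (roughly 1s, 1.5s, 2s, 3s, 10s, 13s, 21s across 17:08:21 to 17:09:15) show the test polling with a growing backoff until the host reports Running or the budget runs out. A minimal sketch of such a loop; the intervals and budget here are illustrative, not the test's actual tuning:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForRunning re-runs `minikube status` with growing sleeps until it
    // stops failing (a stopped host exits with status 7, as in the log) or
    // the time budget is spent.
    func waitForRunning(profile string, budget time.Duration) error {
    	deadline := time.Now().Add(budget)
    	delay := time.Second
    	for time.Now().Before(deadline) {
    		if err := exec.Command("minikube", "-p", profile, "status").Run(); err == nil {
    			return nil // host, kubelet and apiserver all report Running
    		}
    		time.Sleep(delay)
    		if delay < 15*time.Second {
    			delay *= 2 // back off, as the widening gaps in the log suggest
    		}
    	}
    	return fmt.Errorf("%s did not reach Running within %s", profile, budget)
    }

    func main() {
    	fmt.Println(waitForRunning("multinode-984000", time.Minute))
    }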

TestMultiNode/serial/RestartKeepsNodes (8.52s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-984000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-984000
E0913 17:09:18.632048    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-984000: (3.163852083s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-984000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-984000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.222426042s)

-- stdout --
	* [multinode-984000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-984000" primary control-plane node in "multinode-984000" cluster
	* Restarting existing qemu2 VM for "multinode-984000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-984000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:09:19.132199    4638 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:09:19.132354    4638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:09:19.132358    4638 out.go:358] Setting ErrFile to fd 2...
	I0913 17:09:19.132362    4638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:09:19.132525    4638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:09:19.133749    4638 out.go:352] Setting JSON to false
	I0913 17:09:19.153023    4638 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4123,"bootTime":1726268436,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:09:19.153100    4638 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:09:19.156837    4638 out.go:177] * [multinode-984000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:09:19.165716    4638 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:09:19.165745    4638 notify.go:220] Checking for updates...
	I0913 17:09:19.172608    4638 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:09:19.175728    4638 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:09:19.178712    4638 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:09:19.181626    4638 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:09:19.184703    4638 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:09:19.187964    4638 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:09:19.188014    4638 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:09:19.191652    4638 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 17:09:19.198708    4638 start.go:297] selected driver: qemu2
	I0913 17:09:19.198714    4638 start.go:901] validating driver "qemu2" against &{Name:multinode-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:09:19.198762    4638 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:09:19.201151    4638 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:09:19.201175    4638 cni.go:84] Creating CNI manager for ""
	I0913 17:09:19.201204    4638 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0913 17:09:19.201259    4638 start.go:340] cluster config:
	{Name:multinode-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:09:19.205232    4638 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:09:19.212765    4638 out.go:177] * Starting "multinode-984000" primary control-plane node in "multinode-984000" cluster
	I0913 17:09:19.216751    4638 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:09:19.216768    4638 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:09:19.216779    4638 cache.go:56] Caching tarball of preloaded images
	I0913 17:09:19.216849    4638 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:09:19.216856    4638 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:09:19.216915    4638 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/multinode-984000/config.json ...
	I0913 17:09:19.217393    4638 start.go:360] acquireMachinesLock for multinode-984000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:09:19.217429    4638 start.go:364] duration metric: took 30.167µs to acquireMachinesLock for "multinode-984000"
	I0913 17:09:19.217439    4638 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:09:19.217444    4638 fix.go:54] fixHost starting: 
	I0913 17:09:19.217577    4638 fix.go:112] recreateIfNeeded on multinode-984000: state=Stopped err=<nil>
	W0913 17:09:19.217588    4638 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:09:19.225691    4638 out.go:177] * Restarting existing qemu2 VM for "multinode-984000" ...
	I0913 17:09:19.229643    4638 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:09:19.229677    4638 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:49:4d:32:90:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/disk.qcow2
	I0913 17:09:19.231814    4638 main.go:141] libmachine: STDOUT: 
	I0913 17:09:19.231830    4638 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:09:19.231864    4638 fix.go:56] duration metric: took 14.420333ms for fixHost
	I0913 17:09:19.231870    4638 start.go:83] releasing machines lock for "multinode-984000", held for 14.436125ms
	W0913 17:09:19.231875    4638 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:09:19.231911    4638 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:09:19.231916    4638 start.go:729] Will try again in 5 seconds ...
	I0913 17:09:24.234075    4638 start.go:360] acquireMachinesLock for multinode-984000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:09:24.234450    4638 start.go:364] duration metric: took 289.625µs to acquireMachinesLock for "multinode-984000"
	I0913 17:09:24.234572    4638 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:09:24.234589    4638 fix.go:54] fixHost starting: 
	I0913 17:09:24.235280    4638 fix.go:112] recreateIfNeeded on multinode-984000: state=Stopped err=<nil>
	W0913 17:09:24.235305    4638 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:09:24.243823    4638 out.go:177] * Restarting existing qemu2 VM for "multinode-984000" ...
	I0913 17:09:24.247728    4638 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:09:24.247949    4638 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:49:4d:32:90:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/disk.qcow2
	I0913 17:09:24.256873    4638 main.go:141] libmachine: STDOUT: 
	I0913 17:09:24.256942    4638 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:09:24.257066    4638 fix.go:56] duration metric: took 22.47425ms for fixHost
	I0913 17:09:24.257089    4638 start.go:83] releasing machines lock for "multinode-984000", held for 22.616458ms
	W0913 17:09:24.257305    4638 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-984000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-984000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:09:24.263568    4638 out.go:201] 
	W0913 17:09:24.267819    4638 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:09:24.267882    4638 out.go:270] * 
	* 
	W0913 17:09:24.270382    4638 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:09:24.278770    4638 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-984000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-984000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000: exit status 7 (32.964625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.52s)
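
Every failed restart in this block dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu is never handed the network file descriptor it is started with (-netdev socket,id=net0,fd=3). That points at the socket_vmnet daemon not running on this host rather than at the test logic. A minimal Go sketch to confirm that diagnosis, assuming nothing beyond the socket path shown in the log:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Dials the unix socket that socket_vmnet_client needs. "connection refused"
// here reproduces the failure above: the socket file may exist, but no daemon
// is accepting connections on it.
func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe also reports "connection refused", the daemon has to be (re)started before any qemu2/socket_vmnet test in this suite can pass.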

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 node delete m03: exit status 83 (40.530125ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-984000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-984000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-984000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 status --alsologtostderr: exit status 7 (30.125125ms)

                                                
                                                
-- stdout --
	multinode-984000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 17:09:24.463568    4652 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:09:24.463716    4652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:09:24.463719    4652 out.go:358] Setting ErrFile to fd 2...
	I0913 17:09:24.463721    4652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:09:24.463845    4652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:09:24.463981    4652 out.go:352] Setting JSON to false
	I0913 17:09:24.463990    4652 mustload.go:65] Loading cluster: multinode-984000
	I0913 17:09:24.464054    4652 notify.go:220] Checking for updates...
	I0913 17:09:24.464192    4652 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:09:24.464199    4652 status.go:255] checking status of multinode-984000 ...
	I0913 17:09:24.464420    4652 status.go:330] multinode-984000 host status = "Stopped" (err=<nil>)
	I0913 17:09:24.464424    4652 status.go:343] host is not running, skipping remaining checks
	I0913 17:09:24.464426    4652 status.go:257] multinode-984000 status: &{Name:multinode-984000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-984000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000: exit status 7 (30.273959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
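
The delete itself exits 83 because the control-plane host is already stopped, and the post-mortem then probes status with a Go template, treating exit status 7 as informational ("may be ok"). A sketch of that probe, assuming only what the helper output shows, namely that exit status 7 still comes with a usable state string on stdout:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostState runs the same status probe the post-mortem helper uses and, as in
// the log above, does not treat exit status 7 as fatal.
func hostState(profile string) (string, error) {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		return string(out), nil // e.g. "Stopped" (may be ok)
	}
	return string(out), err
}

func main() {
	state, err := hostState("multinode-984000")
	fmt.Printf("state=%q err=%v\n", state, err)
}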

                                                
                                    
TestMultiNode/serial/StopMultiNode (2.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-984000 stop: (1.892492458s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 status: exit status 7 (63.312208ms)

                                                
                                                
-- stdout --
	multinode-984000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-984000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-984000 status --alsologtostderr: exit status 7 (32.802417ms)

                                                
                                                
-- stdout --
	multinode-984000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 17:09:26.482995    4668 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:09:26.483131    4668 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:09:26.483135    4668 out.go:358] Setting ErrFile to fd 2...
	I0913 17:09:26.483138    4668 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:09:26.483265    4668 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:09:26.483390    4668 out.go:352] Setting JSON to false
	I0913 17:09:26.483400    4668 mustload.go:65] Loading cluster: multinode-984000
	I0913 17:09:26.483445    4668 notify.go:220] Checking for updates...
	I0913 17:09:26.483603    4668 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:09:26.483609    4668 status.go:255] checking status of multinode-984000 ...
	I0913 17:09:26.483850    4668 status.go:330] multinode-984000 host status = "Stopped" (err=<nil>)
	I0913 17:09:26.483854    4668 status.go:343] host is not running, skipping remaining checks
	I0913 17:09:26.483856    4668 status.go:257] multinode-984000 status: &{Name:multinode-984000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-984000 status --alsologtostderr": multinode-984000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-984000 status --alsologtostderr": multinode-984000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000: exit status 7 (30.452625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.02s)
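
The stop succeeds, but the assertions fail because only one "host: Stopped" entry appears in the status output: the earlier node-creation steps in this serial suite never produced a second node, so a two-node count is impossible here. A sketch of the kind of counting check involved, assuming the test counts "host: Stopped" occurrences (the exact multinode_test.go logic may differ):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// statusOut stands in for the `minikube status` output captured above.
	statusOut := "multinode-984000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
	wantNodes := 2 // the serial suite expects a two-node cluster at this point
	if got := strings.Count(statusOut, "host: Stopped"); got != wantNodes {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
	}
}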

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-984000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-984000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.184160458s)

                                                
                                                
-- stdout --
	* [multinode-984000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-984000" primary control-plane node in "multinode-984000" cluster
	* Restarting existing qemu2 VM for "multinode-984000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-984000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 17:09:26.543534    4672 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:09:26.543661    4672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:09:26.543664    4672 out.go:358] Setting ErrFile to fd 2...
	I0913 17:09:26.543667    4672 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:09:26.543809    4672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:09:26.544852    4672 out.go:352] Setting JSON to false
	I0913 17:09:26.560959    4672 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4130,"bootTime":1726268436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:09:26.561033    4672 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:09:26.564785    4672 out.go:177] * [multinode-984000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:09:26.571751    4672 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:09:26.571831    4672 notify.go:220] Checking for updates...
	I0913 17:09:26.579738    4672 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:09:26.582719    4672 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:09:26.585648    4672 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:09:26.588717    4672 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:09:26.591722    4672 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:09:26.595021    4672 config.go:182] Loaded profile config "multinode-984000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:09:26.595270    4672 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:09:26.599631    4672 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 17:09:26.606712    4672 start.go:297] selected driver: qemu2
	I0913 17:09:26.606717    4672 start.go:901] validating driver "qemu2" against &{Name:multinode-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:09:26.606783    4672 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:09:26.609239    4672 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:09:26.609261    4672 cni.go:84] Creating CNI manager for ""
	I0913 17:09:26.609279    4672 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0913 17:09:26.609316    4672 start.go:340] cluster config:
	{Name:multinode-984000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-984000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:09:26.612751    4672 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:09:26.620702    4672 out.go:177] * Starting "multinode-984000" primary control-plane node in "multinode-984000" cluster
	I0913 17:09:26.624723    4672 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:09:26.624738    4672 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:09:26.624750    4672 cache.go:56] Caching tarball of preloaded images
	I0913 17:09:26.624807    4672 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:09:26.624812    4672 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:09:26.624871    4672 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/multinode-984000/config.json ...
	I0913 17:09:26.625309    4672 start.go:360] acquireMachinesLock for multinode-984000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:09:26.625335    4672 start.go:364] duration metric: took 20.75µs to acquireMachinesLock for "multinode-984000"
	I0913 17:09:26.625343    4672 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:09:26.625349    4672 fix.go:54] fixHost starting: 
	I0913 17:09:26.625463    4672 fix.go:112] recreateIfNeeded on multinode-984000: state=Stopped err=<nil>
	W0913 17:09:26.625471    4672 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:09:26.633680    4672 out.go:177] * Restarting existing qemu2 VM for "multinode-984000" ...
	I0913 17:09:26.637630    4672 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:09:26.637664    4672 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:49:4d:32:90:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/disk.qcow2
	I0913 17:09:26.639528    4672 main.go:141] libmachine: STDOUT: 
	I0913 17:09:26.639546    4672 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:09:26.639575    4672 fix.go:56] duration metric: took 14.226291ms for fixHost
	I0913 17:09:26.639579    4672 start.go:83] releasing machines lock for "multinode-984000", held for 14.240167ms
	W0913 17:09:26.639584    4672 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:09:26.639617    4672 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:09:26.639621    4672 start.go:729] Will try again in 5 seconds ...
	I0913 17:09:31.641724    4672 start.go:360] acquireMachinesLock for multinode-984000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:09:31.642219    4672 start.go:364] duration metric: took 405.917µs to acquireMachinesLock for "multinode-984000"
	I0913 17:09:31.642378    4672 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:09:31.642399    4672 fix.go:54] fixHost starting: 
	I0913 17:09:31.643091    4672 fix.go:112] recreateIfNeeded on multinode-984000: state=Stopped err=<nil>
	W0913 17:09:31.643118    4672 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:09:31.647726    4672 out.go:177] * Restarting existing qemu2 VM for "multinode-984000" ...
	I0913 17:09:31.655624    4672 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:09:31.655780    4672 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:49:4d:32:90:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/multinode-984000/disk.qcow2
	I0913 17:09:31.665264    4672 main.go:141] libmachine: STDOUT: 
	I0913 17:09:31.665329    4672 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:09:31.665450    4672 fix.go:56] duration metric: took 23.025541ms for fixHost
	I0913 17:09:31.665469    4672 start.go:83] releasing machines lock for "multinode-984000", held for 23.225792ms
	W0913 17:09:31.665656    4672 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-984000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-984000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:09:31.672705    4672 out.go:201] 
	W0913 17:09:31.676618    4672 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:09:31.676643    4672 out.go:270] * 
	* 
	W0913 17:09:31.679464    4672 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:09:31.686625    4672 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-984000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000: exit status 7 (69.473625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
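
The restart follows the fixed retry policy visible in the log: one failed driver start, a five-second wait ("Will try again in 5 seconds ..."), a single retry, then exit with GUEST_PROVISION. A sketch of that retry-once pattern (function names are illustrative, not minikube internals):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the qemu2 driver start that keeps failing above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // the fixed backoff seen in the log
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}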

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-984000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-984000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-984000-m01 --driver=qemu2 : exit status 80 (9.977057s)

                                                
                                                
-- stdout --
	* [multinode-984000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-984000-m01" primary control-plane node in "multinode-984000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-984000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-984000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-984000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-984000-m02 --driver=qemu2 : exit status 80 (10.024223125s)

                                                
                                                
-- stdout --
	* [multinode-984000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-984000-m02" primary control-plane node in "multinode-984000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-984000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-984000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-984000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-984000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-984000: exit status 83 (82.038583ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-984000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-984000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-984000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-984000 -n multinode-984000: exit status 7 (31.033666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-984000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.23s)
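
This test exercises profile names that collide with the multinode node-naming scheme ("<cluster>-mNN"), but both starts fail on the socket_vmnet connection before any conflict handling is reached. An illustrative check for the suffix pattern (not minikube's actual validation code):

package main

import (
	"fmt"
	"regexp"
)

// nodeSuffix matches names shaped like "<cluster>-mNN", the scheme multinode
// clusters use for their secondary nodes.
var nodeSuffix = regexp.MustCompile(`^(.+)-m(\d+)$`)

func main() {
	for _, name := range []string{"multinode-984000-m01", "multinode-984000-m02", "test-preload-326000"} {
		if m := nodeSuffix.FindStringSubmatch(name); m != nil {
			fmt.Printf("%q may conflict with node m%s of cluster %q\n", name, m[2], m[1])
		} else {
			fmt.Printf("%q: no node-name conflict\n", name)
		}
	}
}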

                                                
                                    
TestPreload (10.04s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-326000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-326000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.884834s)

                                                
                                                
-- stdout --
	* [test-preload-326000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-326000" primary control-plane node in "test-preload-326000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-326000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 17:09:52.145103    4731 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:09:52.145238    4731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:09:52.145241    4731 out.go:358] Setting ErrFile to fd 2...
	I0913 17:09:52.145243    4731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:09:52.145388    4731 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:09:52.146457    4731 out.go:352] Setting JSON to false
	I0913 17:09:52.162817    4731 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4156,"bootTime":1726268436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:09:52.162884    4731 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:09:52.170564    4731 out.go:177] * [test-preload-326000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:09:52.178383    4731 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:09:52.178414    4731 notify.go:220] Checking for updates...
	I0913 17:09:52.186362    4731 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:09:52.189298    4731 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:09:52.192367    4731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:09:52.195350    4731 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:09:52.198294    4731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:09:52.201660    4731 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:09:52.201708    4731 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:09:52.205284    4731 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:09:52.212388    4731 start.go:297] selected driver: qemu2
	I0913 17:09:52.212394    4731 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:09:52.212400    4731 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:09:52.214812    4731 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:09:52.217355    4731 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:09:52.218913    4731 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:09:52.218929    4731 cni.go:84] Creating CNI manager for ""
	I0913 17:09:52.218963    4731 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:09:52.218968    4731 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 17:09:52.218999    4731 start.go:340] cluster config:
	{Name:test-preload-326000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-326000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:09:52.222789    4731 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:09:52.230413    4731 out.go:177] * Starting "test-preload-326000" primary control-plane node in "test-preload-326000" cluster
	I0913 17:09:52.234237    4731 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0913 17:09:52.234316    4731 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/test-preload-326000/config.json ...
	I0913 17:09:52.234332    4731 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/test-preload-326000/config.json: {Name:mk90d1aea987c8a1132497f3b2698a6083998d55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:09:52.234346    4731 cache.go:107] acquiring lock: {Name:mkcefae73ae7b323d0a2cb91a0a61e7dadc9469f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:09:52.234364    4731 cache.go:107] acquiring lock: {Name:mk106a1f3cab70cacd292e03669510ee12219414 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:09:52.234359    4731 cache.go:107] acquiring lock: {Name:mk7e05be4b489bd565456cd6f75a733d95257fe4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:09:52.234559    4731 cache.go:107] acquiring lock: {Name:mk5008330cb5466d50368d7f21153fede0b2dc4a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:09:52.234557    4731 cache.go:107] acquiring lock: {Name:mk027dd9bc8ea47d1fd55c6da112e9de166e4ed0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:09:52.234571    4731 cache.go:107] acquiring lock: {Name:mk7ed19af145231e08e07495ffdce08f46708ef4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:09:52.234534    4731 cache.go:107] acquiring lock: {Name:mk9b6f9b817e41e9c7631d407a0f718fdb8590d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:09:52.234602    4731 start.go:360] acquireMachinesLock for test-preload-326000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:09:52.234657    4731 start.go:364] duration metric: took 46.708µs to acquireMachinesLock for "test-preload-326000"
	I0913 17:09:52.234674    4731 start.go:93] Provisioning new machine with config: &{Name:test-preload-326000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-326000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:09:52.234729    4731 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:09:52.234845    4731 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0913 17:09:52.234859    4731 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0913 17:09:52.234866    4731 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0913 17:09:52.234867    4731 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0913 17:09:52.234346    4731 cache.go:107] acquiring lock: {Name:mke26256cfdad2c2dfdb80fdf60149ce214ea396 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:09:52.234905    4731 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 17:09:52.234849    4731 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0913 17:09:52.234937    4731 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:09:52.234975    4731 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0913 17:09:52.236633    4731 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 17:09:52.243963    4731 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0913 17:09:52.244565    4731 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:09:52.246547    4731 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0913 17:09:52.246602    4731 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0913 17:09:52.246617    4731 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0913 17:09:52.246636    4731 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0913 17:09:52.246651    4731 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 17:09:52.246712    4731 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0913 17:09:52.254619    4731 start.go:159] libmachine.API.Create for "test-preload-326000" (driver="qemu2")
	I0913 17:09:52.254641    4731 client.go:168] LocalClient.Create starting
	I0913 17:09:52.254721    4731 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:09:52.254756    4731 main.go:141] libmachine: Decoding PEM data...
	I0913 17:09:52.254766    4731 main.go:141] libmachine: Parsing certificate...
	I0913 17:09:52.254813    4731 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:09:52.254838    4731 main.go:141] libmachine: Decoding PEM data...
	I0913 17:09:52.254847    4731 main.go:141] libmachine: Parsing certificate...
	I0913 17:09:52.255197    4731 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:09:52.413852    4731 main.go:141] libmachine: Creating SSH key...
	I0913 17:09:52.481846    4731 main.go:141] libmachine: Creating Disk image...
	I0913 17:09:52.481864    4731 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:09:52.482037    4731 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/disk.qcow2
	I0913 17:09:52.491911    4731 main.go:141] libmachine: STDOUT: 
	I0913 17:09:52.491937    4731 main.go:141] libmachine: STDERR: 
	I0913 17:09:52.492007    4731 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/disk.qcow2 +20000M
	I0913 17:09:52.500973    4731 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:09:52.501014    4731 main.go:141] libmachine: STDERR: 
	I0913 17:09:52.501042    4731 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/disk.qcow2
	I0913 17:09:52.501046    4731 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:09:52.501061    4731 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:09:52.501099    4731 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:8d:1c:90:fe:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/disk.qcow2
	I0913 17:09:52.503022    4731 main.go:141] libmachine: STDOUT: 
	I0913 17:09:52.503042    4731 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:09:52.503062    4731 client.go:171] duration metric: took 248.417625ms to LocalClient.Create
	I0913 17:09:52.778174    4731 cache.go:162] opening:  /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0913 17:09:52.807554    4731 cache.go:162] opening:  /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0913 17:09:52.816722    4731 cache.go:162] opening:  /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0913 17:09:52.869412    4731 cache.go:162] opening:  /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0913 17:09:52.892593    4731 cache.go:162] opening:  /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0913 17:09:52.908780    4731 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0913 17:09:52.908817    4731 cache.go:162] opening:  /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0913 17:09:52.918550    4731 cache.go:157] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0913 17:09:52.918572    4731 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 684.099958ms
	I0913 17:09:52.918592    4731 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0913 17:09:52.922044    4731 cache.go:162] opening:  /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0913 17:09:53.225011    4731 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0913 17:09:53.225100    4731 cache.go:162] opening:  /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0913 17:09:53.741965    4731 cache.go:157] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0913 17:09:53.742014    4731 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.507691708s
	I0913 17:09:53.742054    4731 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0913 17:09:54.488648    4731 cache.go:157] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0913 17:09:54.488692    4731 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.254176167s
	I0913 17:09:54.488740    4731 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0913 17:09:54.503196    4731 start.go:128] duration metric: took 2.268483875s to createHost
	I0913 17:09:54.503237    4731 start.go:83] releasing machines lock for "test-preload-326000", held for 2.268601542s
	W0913 17:09:54.503309    4731 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:09:54.514053    4731 out.go:177] * Deleting "test-preload-326000" in qemu2 ...
	W0913 17:09:54.545487    4731 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:09:54.545519    4731 start.go:729] Will try again in 5 seconds ...
	I0913 17:09:55.911789    4731 cache.go:157] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0913 17:09:55.911848    4731 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.6773855s
	I0913 17:09:55.911900    4731 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0913 17:09:56.258495    4731 cache.go:157] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0913 17:09:56.258550    4731 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.024241333s
	I0913 17:09:56.258573    4731 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0913 17:09:57.209788    4731 cache.go:157] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0913 17:09:57.209838    4731 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.975565958s
	I0913 17:09:57.209862    4731 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0913 17:09:59.132152    4731 cache.go:157] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0913 17:09:59.132203    4731 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.897950084s
	I0913 17:09:59.132227    4731 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0913 17:09:59.545747    4731 start.go:360] acquireMachinesLock for test-preload-326000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:09:59.546199    4731 start.go:364] duration metric: took 371.167µs to acquireMachinesLock for "test-preload-326000"
	I0913 17:09:59.546324    4731 start.go:93] Provisioning new machine with config: &{Name:test-preload-326000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-326000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:09:59.546540    4731 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:09:59.556108    4731 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 17:09:59.606619    4731 start.go:159] libmachine.API.Create for "test-preload-326000" (driver="qemu2")
	I0913 17:09:59.606673    4731 client.go:168] LocalClient.Create starting
	I0913 17:09:59.606801    4731 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:09:59.606867    4731 main.go:141] libmachine: Decoding PEM data...
	I0913 17:09:59.606887    4731 main.go:141] libmachine: Parsing certificate...
	I0913 17:09:59.606944    4731 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:09:59.606998    4731 main.go:141] libmachine: Decoding PEM data...
	I0913 17:09:59.607009    4731 main.go:141] libmachine: Parsing certificate...
	I0913 17:09:59.607513    4731 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:09:59.775350    4731 main.go:141] libmachine: Creating SSH key...
	I0913 17:09:59.929257    4731 main.go:141] libmachine: Creating Disk image...
	I0913 17:09:59.929264    4731 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:09:59.929472    4731 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/disk.qcow2
	I0913 17:09:59.938895    4731 main.go:141] libmachine: STDOUT: 
	I0913 17:09:59.938917    4731 main.go:141] libmachine: STDERR: 
	I0913 17:09:59.938970    4731 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/disk.qcow2 +20000M
	I0913 17:09:59.946936    4731 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:09:59.946950    4731 main.go:141] libmachine: STDERR: 
	I0913 17:09:59.946966    4731 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/disk.qcow2
	I0913 17:09:59.946970    4731 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:09:59.946981    4731 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:09:59.947015    4731 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:9c:18:4d:61:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/test-preload-326000/disk.qcow2
	I0913 17:09:59.948710    4731 main.go:141] libmachine: STDOUT: 
	I0913 17:09:59.948732    4731 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:09:59.948750    4731 client.go:171] duration metric: took 342.07675ms to LocalClient.Create
	I0913 17:10:01.948998    4731 start.go:128] duration metric: took 2.402440291s to createHost
	I0913 17:10:01.949072    4731 start.go:83] releasing machines lock for "test-preload-326000", held for 2.402881125s
	W0913 17:10:01.949312    4731 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-326000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-326000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:10:01.965040    4731 out.go:201] 
	W0913 17:10:01.966749    4731 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:10:01.966776    4731 out.go:270] * 
	* 
	W0913 17:10:01.969669    4731 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:10:01.984817    4731 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-326000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-09-13 17:10:02.003467 -0700 PDT m=+2669.505833209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-326000 -n test-preload-326000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-326000 -n test-preload-326000: exit status 7 (66.2055ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-326000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-326000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-326000
--- FAIL: TestPreload (10.04s)
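
Editor's note: every qemu2 VM creation in this run dies the same way: socket_vmnet_client cannot connect to /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched. That points at the socket_vmnet daemon on the CI host not running (or serving a different socket path), not at the test logic itself. A minimal check-and-restart sketch, assuming the standard socket_vmnet install layout implied by the client path in these logs; the --vmnet-gateway value is an illustrative placeholder, not a value taken from this report:

	# Is anything serving the socket minikube expects?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet || echo "socket_vmnet daemon is not running"

	# Start the daemon by hand (vmnet requires root). The gateway
	# address below is a placeholder; use whatever the host normally runs.
	sudo /opt/socket_vmnet/bin/socket_vmnet \
	  --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &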

TestScheduledStopUnix (9.99s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-688000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-688000 --memory=2048 --driver=qemu2 : exit status 80 (9.835950875s)

-- stdout --
	* [scheduled-stop-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-688000" primary control-plane node in "scheduled-stop-688000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-688000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-688000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-688000" primary control-plane node in "scheduled-stop-688000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-688000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-688000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-09-13 17:10:11.99028 -0700 PDT m=+2679.492795793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-688000 -n scheduled-stop-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-688000 -n scheduled-stop-688000: exit status 7 (68.674167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-688000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-688000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-688000
--- FAIL: TestScheduledStopUnix (9.99s)
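
Editor's note: the disk-image step succeeds on every attempt (see "Image resized." in the TestPreload trace above); only the network attach fails. minikube prepares the machine disk by converting a raw boot image to qcow2 and then growing it by the requested disk size. The same two qemu-img invocations, reduced to a self-contained sketch; the create/info lines and the bare file names are added here for illustration and are not part of the test run:

	qemu-img create -f raw disk.qcow2.raw 1G     # stand-in for the boot image
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M           # matches Disk=20000MB above
	qemu-img info disk.qcow2                     # confirm the new virtual size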

TestSkaffold (13.53s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1244115388 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1244115388 version: (1.0633235s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-589000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-589000 --memory=2600 --driver=qemu2 : exit status 80 (10.205871459s)

-- stdout --
	* [skaffold-589000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-589000" primary control-plane node in "skaffold-589000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-589000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-589000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-589000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-589000" primary control-plane node in "skaffold-589000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-589000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-589000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-09-13 17:10:25.531043 -0700 PDT m=+2693.033761751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-589000 -n skaffold-589000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-589000 -n skaffold-589000: exit status 7 (60.823625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-589000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-589000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-589000
--- FAIL: TestSkaffold (13.53s)
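
Editor's note: the empty STDOUT in these failures follows from the wrapper pattern visible in the full command lines above: socket_vmnet_client first connects to the unix socket and only then execs the wrapped QEMU process, handing it the connection as file descriptor 3, which is what -netdev socket,id=net0,fd=3 consumes. The "Connection refused" therefore comes from the client's connect() call, before qemu-system-aarch64 ever starts. The invocation shape, trimmed from the log (MAC address and the boot ISO, disk, QMP and pidfile arguments elided):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
	    -m 2600 -smp 2 \
	    -device virtio-net-pci,netdev=net0,mac=... \
	    -netdev socket,id=net0,fd=3 \
	    ...    # boot ISO, qcow2 disk, QMP socket, pidfile as logged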

TestRunningBinaryUpgrade (587.33s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.748624066 start -p running-upgrade-714000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.748624066 start -p running-upgrade-714000 --memory=2200 --vm-driver=qemu2 : (49.943201625s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-714000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0913 17:13:42.446341    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-714000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m21.345492459s)

-- stdout --
	* [running-upgrade-714000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-714000" primary control-plane node in "running-upgrade-714000" cluster
	* Updating the running qemu2 "running-upgrade-714000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0913 17:11:59.125707    5124 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:11:59.125845    5124 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:11:59.125849    5124 out.go:358] Setting ErrFile to fd 2...
	I0913 17:11:59.125851    5124 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:11:59.125984    5124 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:11:59.127074    5124 out.go:352] Setting JSON to false
	I0913 17:11:59.143998    5124 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4283,"bootTime":1726268436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:11:59.144089    5124 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:11:59.149055    5124 out.go:177] * [running-upgrade-714000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:11:59.155906    5124 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:11:59.155956    5124 notify.go:220] Checking for updates...
	I0913 17:11:59.164073    5124 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:11:59.167099    5124 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:11:59.170059    5124 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:11:59.173100    5124 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:11:59.177991    5124 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:11:59.184352    5124 config.go:182] Loaded profile config "running-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:11:59.188086    5124 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0913 17:11:59.191044    5124 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:11:59.192676    5124 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 17:11:59.200088    5124 start.go:297] selected driver: qemu2
	I0913 17:11:59.200094    5124 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50289 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 17:11:59.200167    5124 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:11:59.202448    5124 cni.go:84] Creating CNI manager for ""
	I0913 17:11:59.202477    5124 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:11:59.202512    5124 start.go:340] cluster config:
	{Name:running-upgrade-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50289 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 17:11:59.202564    5124 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:11:59.210041    5124 out.go:177] * Starting "running-upgrade-714000" primary control-plane node in "running-upgrade-714000" cluster
	I0913 17:11:59.214091    5124 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0913 17:11:59.214103    5124 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0913 17:11:59.214110    5124 cache.go:56] Caching tarball of preloaded images
	I0913 17:11:59.214183    5124 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:11:59.214190    5124 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0913 17:11:59.214245    5124 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/config.json ...
	I0913 17:11:59.214715    5124 start.go:360] acquireMachinesLock for running-upgrade-714000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:11:59.214744    5124 start.go:364] duration metric: took 23.209µs to acquireMachinesLock for "running-upgrade-714000"
	I0913 17:11:59.214753    5124 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:11:59.214758    5124 fix.go:54] fixHost starting: 
	I0913 17:11:59.215366    5124 fix.go:112] recreateIfNeeded on running-upgrade-714000: state=Running err=<nil>
	W0913 17:11:59.215375    5124 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:11:59.220025    5124 out.go:177] * Updating the running qemu2 "running-upgrade-714000" VM ...
	I0913 17:11:59.228019    5124 machine.go:93] provisionDockerMachine start ...
	I0913 17:11:59.228078    5124 main.go:141] libmachine: Using SSH client type: native
	I0913 17:11:59.228206    5124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104491190] 0x1044939d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0913 17:11:59.228212    5124 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 17:11:59.283470    5124 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-714000
	
	I0913 17:11:59.283491    5124 buildroot.go:166] provisioning hostname "running-upgrade-714000"
	I0913 17:11:59.283538    5124 main.go:141] libmachine: Using SSH client type: native
	I0913 17:11:59.283661    5124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104491190] 0x1044939d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0913 17:11:59.283666    5124 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-714000 && echo "running-upgrade-714000" | sudo tee /etc/hostname
	I0913 17:11:59.342767    5124 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-714000
	
	I0913 17:11:59.342828    5124 main.go:141] libmachine: Using SSH client type: native
	I0913 17:11:59.342942    5124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104491190] 0x1044939d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0913 17:11:59.342951    5124 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-714000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-714000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-714000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 17:11:59.395168    5124 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 17:11:59.395179    5124 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19640-1360/.minikube CaCertPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19640-1360/.minikube}
	I0913 17:11:59.395192    5124 buildroot.go:174] setting up certificates
	I0913 17:11:59.395199    5124 provision.go:84] configureAuth start
	I0913 17:11:59.395204    5124 provision.go:143] copyHostCerts
	I0913 17:11:59.395266    5124 exec_runner.go:144] found /Users/jenkins/minikube-integration/19640-1360/.minikube/key.pem, removing ...
	I0913 17:11:59.395274    5124 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19640-1360/.minikube/key.pem
	I0913 17:11:59.395393    5124 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19640-1360/.minikube/key.pem (1679 bytes)
	I0913 17:11:59.395561    5124 exec_runner.go:144] found /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.pem, removing ...
	I0913 17:11:59.395564    5124 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.pem
	I0913 17:11:59.395642    5124 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.pem (1078 bytes)
	I0913 17:11:59.395733    5124 exec_runner.go:144] found /Users/jenkins/minikube-integration/19640-1360/.minikube/cert.pem, removing ...
	I0913 17:11:59.395736    5124 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19640-1360/.minikube/cert.pem
	I0913 17:11:59.395774    5124 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19640-1360/.minikube/cert.pem (1123 bytes)
	I0913 17:11:59.395892    5124 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-714000 san=[127.0.0.1 localhost minikube running-upgrade-714000]
	I0913 17:11:59.506611    5124 provision.go:177] copyRemoteCerts
	I0913 17:11:59.506661    5124 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 17:11:59.506670    5124 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/running-upgrade-714000/id_rsa Username:docker}
	I0913 17:11:59.536439    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 17:11:59.543202    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0913 17:11:59.551061    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 17:11:59.557736    5124 provision.go:87] duration metric: took 162.526125ms to configureAuth
	I0913 17:11:59.557747    5124 buildroot.go:189] setting minikube options for container-runtime
	I0913 17:11:59.557868    5124 config.go:182] Loaded profile config "running-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:11:59.557903    5124 main.go:141] libmachine: Using SSH client type: native
	I0913 17:11:59.557987    5124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104491190] 0x1044939d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0913 17:11:59.557994    5124 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0913 17:11:59.611383    5124 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0913 17:11:59.611393    5124 buildroot.go:70] root file system type: tmpfs
	I0913 17:11:59.611445    5124 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0913 17:11:59.611502    5124 main.go:141] libmachine: Using SSH client type: native
	I0913 17:11:59.611621    5124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104491190] 0x1044939d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0913 17:11:59.611654    5124 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0913 17:11:59.670759    5124 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0913 17:11:59.670822    5124 main.go:141] libmachine: Using SSH client type: native
	I0913 17:11:59.670942    5124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104491190] 0x1044939d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0913 17:11:59.670951    5124 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0913 17:11:59.731645    5124 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0913 17:11:59.731657    5124 machine.go:96] duration metric: took 503.640083ms to provisionDockerMachine
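The one-line SSH command above is an idempotent unit update: render docker.service.new, diff it against the live unit, and only on a difference move it into place and daemon-reload/enable/restart. A rough Go sketch of that compare-and-swap, assuming local root access rather than an SSH session (and, unlike the shell one-liner, discarding the candidate file when nothing changed):

package main

import (
	"os"
	"os/exec"
)

// swapIfChanged mirrors the `diff -u old new || { mv ...; systemctl ...; }`
// pattern above: only install the rendered unit and restart docker when it
// differs from the live one.
func swapIfChanged(current, next string) error {
	if exec.Command("diff", "-u", current, next).Run() == nil {
		return os.Remove(next) // identical: discard the candidate file
	}
	if err := os.Rename(next, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = swapIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
}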
	I0913 17:11:59.731662    5124 start.go:293] postStartSetup for "running-upgrade-714000" (driver="qemu2")
	I0913 17:11:59.731669    5124 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 17:11:59.731728    5124 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 17:11:59.731737    5124 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/running-upgrade-714000/id_rsa Username:docker}
	I0913 17:11:59.760250    5124 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 17:11:59.761686    5124 info.go:137] Remote host: Buildroot 2021.02.12
	I0913 17:11:59.761694    5124 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19640-1360/.minikube/addons for local assets ...
	I0913 17:11:59.761759    5124 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19640-1360/.minikube/files for local assets ...
	I0913 17:11:59.761848    5124 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19640-1360/.minikube/files/etc/ssl/certs/18822.pem -> 18822.pem in /etc/ssl/certs
	I0913 17:11:59.761947    5124 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 17:11:59.765300    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/files/etc/ssl/certs/18822.pem --> /etc/ssl/certs/18822.pem (1708 bytes)
	I0913 17:11:59.772340    5124 start.go:296] duration metric: took 40.672833ms for postStartSetup
	I0913 17:11:59.772353    5124 fix.go:56] duration metric: took 557.60575ms for fixHost
	I0913 17:11:59.772397    5124 main.go:141] libmachine: Using SSH client type: native
	I0913 17:11:59.772502    5124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104491190] 0x1044939d0 <nil>  [] 0s} localhost 50257 <nil> <nil>}
	I0913 17:11:59.772507    5124 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 17:11:59.825754    5124 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726272720.218426554
	
	I0913 17:11:59.825765    5124 fix.go:216] guest clock: 1726272720.218426554
	I0913 17:11:59.825769    5124 fix.go:229] Guest: 2024-09-13 17:12:00.218426554 -0700 PDT Remote: 2024-09-13 17:11:59.772355 -0700 PDT m=+0.666412376 (delta=446.071554ms)
	I0913 17:11:59.825782    5124 fix.go:200] guest clock delta is within tolerance: 446.071554ms
	I0913 17:11:59.825785    5124 start.go:83] releasing machines lock for "running-upgrade-714000", held for 611.045791ms
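The guest clock check above parses the `date +%s.%N` output (here 1726272720.218426554) and compares it against the host clock, reporting a ~446ms delta as within tolerance. A small sketch of that parse and delta computation; the 2-second tolerance is hypothetical, not minikube's documented bound:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output such as
// "1726272720.218426554" into a time.Time (assumes the full 9-digit
// nanosecond field that GNU date prints).
func parseGuestClock(s string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsecs := int64(0)
	if frac != "" {
		if nsecs, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(secs, nsecs), nil
}

func main() {
	guest, err := parseGuestClock("1726272720.218426554")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	const tolerance = 2 * time.Second // hypothetical tolerance
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta > -tolerance && delta < tolerance)
}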
	I0913 17:11:59.825859    5124 ssh_runner.go:195] Run: cat /version.json
	I0913 17:11:59.825872    5124 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/running-upgrade-714000/id_rsa Username:docker}
	I0913 17:11:59.825859    5124 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 17:11:59.825913    5124 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/running-upgrade-714000/id_rsa Username:docker}
	W0913 17:11:59.826469    5124 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50257: connect: connection refused
	I0913 17:11:59.826482    5124 retry.go:31] will retry after 208.344302ms: dial tcp [::1]:50257: connect: connection refused
	W0913 17:12:00.065341    5124 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0913 17:12:00.065425    5124 ssh_runner.go:195] Run: systemctl --version
	I0913 17:12:00.067418    5124 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 17:12:00.069218    5124 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 17:12:00.069248    5124 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0913 17:12:00.072112    5124 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0913 17:12:00.077549    5124 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 17:12:00.077557    5124 start.go:495] detecting cgroup driver to use...
	I0913 17:12:00.077628    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 17:12:00.082926    5124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0913 17:12:00.086278    5124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0913 17:12:00.089644    5124 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0913 17:12:00.089671    5124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0913 17:12:00.092935    5124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 17:12:00.095727    5124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0913 17:12:00.098605    5124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 17:12:00.102005    5124 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 17:12:00.105236    5124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0913 17:12:00.108142    5124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0913 17:12:00.110981    5124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0913 17:12:00.114269    5124 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 17:12:00.117490    5124 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 17:12:00.119971    5124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:12:00.218212    5124 ssh_runner.go:195] Run: sudo systemctl restart containerd
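The sed edits above force containerd onto the cgroupfs driver by rewriting SystemdCgroup and related keys in /etc/containerd/config.toml before the daemon-reload and restart. The same indentation-preserving substitution can be expressed in Go; the config fragment below is a hypothetical stand-in for the real file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical config.toml fragment; the real file lives at
	// /etc/containerd/config.toml on the guest.
	cfg := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Indentation-preserving rewrite, the Go analogue of
	// sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(cfg, "${1}SystemdCgroup = false"))
}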
	I0913 17:12:00.227545    5124 start.go:495] detecting cgroup driver to use...
	I0913 17:12:00.227617    5124 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0913 17:12:00.234965    5124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 17:12:00.242593    5124 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 17:12:00.248621    5124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 17:12:00.253105    5124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 17:12:00.257688    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 17:12:00.262927    5124 ssh_runner.go:195] Run: which cri-dockerd
	I0913 17:12:00.264157    5124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0913 17:12:00.266776    5124 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0913 17:12:00.271539    5124 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0913 17:12:00.382538    5124 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0913 17:12:00.474044    5124 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0913 17:12:00.474110    5124 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0913 17:12:00.480567    5124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:12:00.573344    5124 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 17:12:01.920161    5124 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.346820125s)
	I0913 17:12:01.920237    5124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0913 17:12:01.925184    5124 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0913 17:12:01.931426    5124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 17:12:01.935924    5124 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0913 17:12:02.032137    5124 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0913 17:12:02.116475    5124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:12:02.195639    5124 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0913 17:12:02.201593    5124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 17:12:02.206084    5124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:12:02.275562    5124 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0913 17:12:02.315136    5124 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0913 17:12:02.315229    5124 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0913 17:12:02.317099    5124 start.go:563] Will wait 60s for crictl version
	I0913 17:12:02.317139    5124 ssh_runner.go:195] Run: which crictl
	I0913 17:12:02.318439    5124 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 17:12:02.331257    5124 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0913 17:12:02.331341    5124 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 17:12:02.343853    5124 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 17:12:02.366569    5124 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0913 17:12:02.366649    5124 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0913 17:12:02.367908    5124 kubeadm.go:883] updating cluster {Name:running-upgrade-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50289 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0913 17:12:02.367953    5124 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0913 17:12:02.368000    5124 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 17:12:02.378383    5124 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 17:12:02.378392    5124 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0913 17:12:02.378448    5124 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 17:12:02.381944    5124 ssh_runner.go:195] Run: which lz4
	I0913 17:12:02.383075    5124 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 17:12:02.384332    5124 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 17:12:02.384341    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0913 17:12:03.361365    5124 docker.go:649] duration metric: took 978.339167ms to copy over tarball
	I0913 17:12:03.361430    5124 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 17:12:04.659164    5124 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.297740708s)
	I0913 17:12:04.659177    5124 ssh_runner.go:146] rm: /preloaded.tar.lz4
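Because /preloaded.tar.lz4 failed the stat existence probe above, the cached tarball is copied over, unpacked into /var, and then deleted. A sketch of that probe-then-copy decision, using a plain ssh invocation in place of minikube's ssh_runner; the user and port come from the log, everything else is an illustrative assumption:

package main

import (
	"fmt"
	"os/exec"
)

// sshRun is a stand-in for minikube's ssh_runner: it executes one remote
// command string on the guest over plain ssh.
func sshRun(cmd string) error {
	return exec.Command("ssh", "-p", "50257", "docker@localhost", cmd).Run()
}

func main() {
	// Same existence probe the log performs before falling back to scp.
	if err := sshRun(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
		fmt.Println("preload missing on guest; scp the cached tarball, then run:")
		fmt.Println(`sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`)
	}
}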
	I0913 17:12:04.676038    5124 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 17:12:04.679146    5124 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0913 17:12:04.684198    5124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:12:04.763473    5124 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 17:12:04.973858    5124 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 17:12:04.986173    5124 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 17:12:04.986184    5124 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0913 17:12:04.986189    5124 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 17:12:04.991569    5124 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:12:04.993504    5124 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 17:12:04.994900    5124 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:12:04.995304    5124 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 17:12:04.996164    5124 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 17:12:04.996435    5124 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 17:12:04.997798    5124 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 17:12:04.999348    5124 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 17:12:04.999425    5124 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0913 17:12:04.999557    5124 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 17:12:05.000629    5124 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 17:12:05.000843    5124 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0913 17:12:05.001801    5124 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 17:12:05.001898    5124 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0913 17:12:05.002784    5124 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0913 17:12:05.003501    5124 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 17:12:05.445835    5124 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 17:12:05.458118    5124 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0913 17:12:05.458144    5124 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 17:12:05.458218    5124 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 17:12:05.459765    5124 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0913 17:12:05.471724    5124 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0913 17:12:05.474663    5124 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0913 17:12:05.477823    5124 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0913 17:12:05.482165    5124 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0913 17:12:05.482185    5124 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0913 17:12:05.482185    5124 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 17:12:05.482225    5124 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0913 17:12:05.489590    5124 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0913 17:12:05.489614    5124 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 17:12:05.489678    5124 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0913 17:12:05.511381    5124 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0913 17:12:05.511403    5124 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0913 17:12:05.511405    5124 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0913 17:12:05.511465    5124 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0913 17:12:05.511471    5124 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0913 17:12:05.511481    5124 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 17:12:05.511515    5124 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0913 17:12:05.511798    5124 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W0913 17:12:05.522578    5124 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0913 17:12:05.522722    5124 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0913 17:12:05.526564    5124 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0913 17:12:05.526575    5124 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0913 17:12:05.526688    5124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0913 17:12:05.532495    5124 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0913 17:12:05.534113    5124 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0913 17:12:05.534133    5124 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 17:12:05.534150    5124 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0913 17:12:05.534165    5124 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0913 17:12:05.534166    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0913 17:12:05.545697    5124 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0913 17:12:05.545714    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0913 17:12:05.553891    5124 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0913 17:12:05.553912    5124 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0913 17:12:05.553983    5124 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0913 17:12:05.557870    5124 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0913 17:12:05.558016    5124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0913 17:12:05.584205    5124 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0913 17:12:05.584223    5124 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0913 17:12:05.584234    5124 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0913 17:12:05.584249    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0913 17:12:05.629636    5124 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0913 17:12:05.629650    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0913 17:12:05.670800    5124 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0913 17:12:05.773803    5124 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0913 17:12:05.773989    5124 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:12:05.788739    5124 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0913 17:12:05.788769    5124 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:12:05.788849    5124 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:12:06.615712    5124 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0913 17:12:06.616213    5124 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0913 17:12:06.621392    5124 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0913 17:12:06.621451    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0913 17:12:06.680163    5124 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0913 17:12:06.680175    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0913 17:12:06.913402    5124 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0913 17:12:06.913445    5124 cache_images.go:92] duration metric: took 1.92727675s to LoadCachedImages
	W0913 17:12:06.913481    5124 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
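Each missing image above follows the same pattern: inspect by ID, remove the wrong-hash copy, scp the cached tarball into /var/lib/minikube/images, and pipe it into docker load. A minimal sketch of the final load step (path taken from the log; error handling simplified):

package main

import (
	"os"
	"os/exec"
)

// loadImage pipes a cached image tarball into `docker load`, the same shape
// as the `sudo cat /var/lib/minikube/images/... | docker load` commands above.
func loadImage(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		panic(err)
	}
}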
	I0913 17:12:06.913490    5124 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0913 17:12:06.913537    5124 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-714000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 17:12:06.913613    5124 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0913 17:12:06.935447    5124 cni.go:84] Creating CNI manager for ""
	I0913 17:12:06.935459    5124 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:12:06.935467    5124 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 17:12:06.935479    5124 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-714000 NodeName:running-upgrade-714000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 17:12:06.935547    5124 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-714000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
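The generated kubeadm.yaml above holds four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a quick structural sanity check one can list the kind of each document; the sketch below scans for kind: lines with the standard library only, over a fragment abbreviated from the config above:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// Abbreviated copy of the four-document kubeadm.yaml generated above.
	doc := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration`
	sc := bufio.NewScanner(strings.NewReader(doc))
	for sc.Scan() {
		if kind, ok := strings.CutPrefix(sc.Text(), "kind: "); ok {
			fmt.Println(kind) // InitConfiguration, ClusterConfiguration, ...
		}
	}
}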
	I0913 17:12:06.935615    5124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0913 17:12:06.938597    5124 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 17:12:06.938633    5124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 17:12:06.941855    5124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0913 17:12:06.946841    5124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 17:12:06.951926    5124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0913 17:12:06.957068    5124 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0913 17:12:06.958513    5124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:12:07.039215    5124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 17:12:07.044595    5124 certs.go:68] Setting up /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000 for IP: 10.0.2.15
	I0913 17:12:07.044601    5124 certs.go:194] generating shared ca certs ...
	I0913 17:12:07.044609    5124 certs.go:226] acquiring lock for ca certs: {Name:mka1fd556c9b3f29c4a4f622bab1c9ab3ca42c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:12:07.044751    5124 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.key
	I0913 17:12:07.044797    5124 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/proxy-client-ca.key
	I0913 17:12:07.044805    5124 certs.go:256] generating profile certs ...
	I0913 17:12:07.044880    5124 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/client.key
	I0913 17:12:07.044898    5124 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/apiserver.key.4ce4803f
	I0913 17:12:07.044910    5124 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/apiserver.crt.4ce4803f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0913 17:12:07.163935    5124 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/apiserver.crt.4ce4803f ...
	I0913 17:12:07.163940    5124 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/apiserver.crt.4ce4803f: {Name:mk2bd44cd826f108191ff3e877331f30f4b6ee50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:12:07.167331    5124 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/apiserver.key.4ce4803f ...
	I0913 17:12:07.167342    5124 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/apiserver.key.4ce4803f: {Name:mk973344e9507ffc80e2d79f63653d59a606dbe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:12:07.167538    5124 certs.go:381] copying /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/apiserver.crt.4ce4803f -> /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/apiserver.crt
	I0913 17:12:07.167683    5124 certs.go:385] copying /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/apiserver.key.4ce4803f -> /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/apiserver.key
	I0913 17:12:07.167810    5124 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/proxy-client.key
	I0913 17:12:07.167934    5124 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/1882.pem (1338 bytes)
	W0913 17:12:07.167955    5124 certs.go:480] ignoring /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/1882_empty.pem, impossibly tiny 0 bytes
	I0913 17:12:07.167960    5124 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 17:12:07.167979    5124 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem (1078 bytes)
	I0913 17:12:07.167997    5124 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem (1123 bytes)
	I0913 17:12:07.168015    5124 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/key.pem (1679 bytes)
	I0913 17:12:07.168053    5124 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/files/etc/ssl/certs/18822.pem (1708 bytes)
	I0913 17:12:07.168372    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 17:12:07.175775    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 17:12:07.182519    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 17:12:07.189902    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 17:12:07.197135    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 17:12:07.203768    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 17:12:07.210341    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 17:12:07.217825    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 17:12:07.225918    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/1882.pem --> /usr/share/ca-certificates/1882.pem (1338 bytes)
	I0913 17:12:07.233047    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/files/etc/ssl/certs/18822.pem --> /usr/share/ca-certificates/18822.pem (1708 bytes)
	I0913 17:12:07.239362    5124 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 17:12:07.246117    5124 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 17:12:07.251875    5124 ssh_runner.go:195] Run: openssl version
	I0913 17:12:07.253804    5124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 17:12:07.257213    5124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 17:12:07.258640    5124 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0913 17:12:07.258665    5124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 17:12:07.260497    5124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 17:12:07.263150    5124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1882.pem && ln -fs /usr/share/ca-certificates/1882.pem /etc/ssl/certs/1882.pem"
	I0913 17:12:07.266625    5124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1882.pem
	I0913 17:12:07.268170    5124 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:41 /usr/share/ca-certificates/1882.pem
	I0913 17:12:07.268196    5124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1882.pem
	I0913 17:12:07.270384    5124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1882.pem /etc/ssl/certs/51391683.0"
	I0913 17:12:07.273534    5124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18822.pem && ln -fs /usr/share/ca-certificates/18822.pem /etc/ssl/certs/18822.pem"
	I0913 17:12:07.276564    5124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18822.pem
	I0913 17:12:07.277923    5124 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:41 /usr/share/ca-certificates/18822.pem
	I0913 17:12:07.277946    5124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18822.pem
	I0913 17:12:07.279758    5124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18822.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 17:12:07.282591    5124 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 17:12:07.284347    5124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 17:12:07.286102    5124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 17:12:07.287848    5124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 17:12:07.289553    5124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 17:12:07.291442    5124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 17:12:07.293098    5124 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
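The openssl x509 -checkend 86400 invocations above ask whether each certificate expires within the next 24 hours (86400 seconds). The equivalent check in Go with crypto/x509, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the crypto/x509 analogue of `openssl x509 -checkend 86400`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		exp, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, exp, err)
	}
}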
	I0913 17:12:07.294902    5124 kubeadm.go:392] StartCluster: {Name:running-upgrade-714000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50289 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-714000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 17:12:07.294977    5124 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 17:12:07.304926    5124 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 17:12:07.308683    5124 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 17:12:07.308708    5124 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 17:12:07.308737    5124 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 17:12:07.311438    5124 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 17:12:07.311687    5124 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-714000" does not appear in /Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:12:07.311739    5124 kubeconfig.go:62] /Users/jenkins/minikube-integration/19640-1360/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-714000" cluster setting kubeconfig missing "running-upgrade-714000" context setting]
	I0913 17:12:07.311868    5124 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/kubeconfig: {Name:mke2b016812cedc34ffbfc79dbc5c22d8c43c377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:12:07.312541    5124 kapi.go:59] client config for running-upgrade-714000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/client.key", CAFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105a69800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 17:12:07.312869    5124 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 17:12:07.315732    5124 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-714000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0913 17:12:07.315741    5124 kubeadm.go:1160] stopping kube-system containers ...
	I0913 17:12:07.315790    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 17:12:07.327222    5124 docker.go:483] Stopping containers: [b5a2a7bff3d9 bd33dd4c82d4 1d68fc4833c0 ebfad5ea78f0 493f2e44acf9 428c1a8c245e 8f162a672acf 78f0daf8c554 0e30a1139ac3 f762385dc068]
	I0913 17:12:07.327302    5124 ssh_runner.go:195] Run: docker stop b5a2a7bff3d9 bd33dd4c82d4 1d68fc4833c0 ebfad5ea78f0 493f2e44acf9 428c1a8c245e 8f162a672acf 78f0daf8c554 0e30a1139ac3 f762385dc068
	I0913 17:12:07.338588    5124 ssh_runner.go:195] Run: sudo systemctl stop kubelet
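Restarting the primary control plane first stops every kube-system container (matched by the k8s_.*_(kube-system)_ name filter) and then the kubelet. A sketch of that collect-then-stop step, using the same docker CLI flags as the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Collect kube-system container IDs with the same name filter as above,
	// then stop them all in one `docker stop` invocation.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers to stop")
		return
	}
	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
		panic(err)
	}
	fmt.Println("stopped:", strings.Join(ids, " "))
}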
	I0913 17:12:07.416725    5124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 17:12:07.420618    5124 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Sep 14 00:11 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Sep 14 00:11 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Sep 14 00:11 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep 14 00:11 /etc/kubernetes/scheduler.conf
	
	I0913 17:12:07.420659    5124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/admin.conf
	I0913 17:12:07.423855    5124 kubeadm.go:163] "https://control-plane.minikube.internal:50289" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0913 17:12:07.423882    5124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 17:12:07.426791    5124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/kubelet.conf
	I0913 17:12:07.429871    5124 kubeadm.go:163] "https://control-plane.minikube.internal:50289" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0913 17:12:07.429898    5124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 17:12:07.433032    5124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/controller-manager.conf
	I0913 17:12:07.435884    5124 kubeadm.go:163] "https://control-plane.minikube.internal:50289" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0913 17:12:07.435911    5124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 17:12:07.438432    5124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/scheduler.conf
	I0913 17:12:07.441559    5124 kubeadm.go:163] "https://control-plane.minikube.internal:50289" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0913 17:12:07.441585    5124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
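The four grep/rm pairs above implement one pattern: each existing kubeconfig is kept only if it already points at the expected control-plane endpoint; otherwise it is deleted so the subsequent "kubeadm init phase kubeconfig" can regenerate it. grep exits 1 when the pattern is absent, which the log surfaces as "Process exited with status 1". A hedged local sketch of that pattern (simplified: no sudo/SSH, errors from removal ignored as in the log flow) follows.

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:50289"
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		// grep exits non-zero when the endpoint is missing from the
    		// file; treat that as stale and remove it for regeneration.
    		if err := exec.Command("grep", endpoint, conf).Run(); err != nil {
    			os.Remove(conf)
    		}
    	}
    }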
	I0913 17:12:07.444665    5124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 17:12:07.447365    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 17:12:07.467695    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 17:12:07.820113    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 17:12:08.019611    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 17:12:08.042888    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
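The five commands above replay "kubeadm init" phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd) against the updated config, each wrapped in bash so the PATH override selects the pinned v1.24.1 binary under /var/lib/minikube/binaries rather than whatever kubeadm is on the host PATH. A sketch of driving that sequence (assuming the same paths as the log; a stand-in, not minikube's code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	phases := []string{
    		"certs all", "kubeconfig all", "kubelet-start",
    		"control-plane all", "etcd local",
    	}
    	for _, p := range phases {
    		// Run through bash -c so the env PATH override applies to
    		// the kubeadm lookup, mirroring the commands in the log.
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
    		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
    			panic(fmt.Sprintf("phase %q failed: %v\n%s", p, err, out))
    		}
    	}
    }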
	I0913 17:12:08.085564    5124 api_server.go:52] waiting for apiserver process to appear ...
	I0913 17:12:08.085653    5124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 17:12:08.588017    5124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 17:12:09.087752    5124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 17:12:09.587696    5124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 17:12:09.591822    5124 api_server.go:72] duration metric: took 1.506280792s to wait for apiserver process to appear ...
	I0913 17:12:09.591831    5124 api_server.go:88] waiting for apiserver healthz status ...
	I0913 17:12:09.591841    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:12:14.593982    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:12:14.594115    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:12:19.595104    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:12:19.595209    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:12:24.596336    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:12:24.596364    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:12:29.597302    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:12:29.597383    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:12:34.598963    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:12:34.599051    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:12:39.601249    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:12:39.601329    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:12:44.604009    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:12:44.604104    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:12:49.604942    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:12:49.605017    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:12:54.606631    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:12:54.606739    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:12:59.608276    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:12:59.608331    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:13:04.610726    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:13:04.610804    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:13:09.613430    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
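Every attempt above times out after roughly five seconds: the apiserver never answers /healthz, so api_server.go keeps cycling between a short-timeout probe and the diagnostics pass that follows. A minimal sketch of that wait loop (assumed behaviour inferred from the log, not minikube's exact code; TLS verification is skipped because the apiserver serves a self-signed cluster certificate):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // per-attempt timeout, as seen in the log
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute) // overall budget (assumed)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if string(body) == "ok" {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for apiserver healthz")
    }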
	I0913 17:13:09.614094    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:13:09.654013    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:13:09.654197    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:13:09.676229    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:13:09.676348    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:13:09.691771    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:13:09.691860    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:13:09.703795    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:13:09.703878    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:13:09.714837    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:13:09.714913    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:13:09.725775    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:13:09.725857    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:13:09.739721    5124 logs.go:276] 0 containers: []
	W0913 17:13:09.739733    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:13:09.739803    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:13:09.751078    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:13:09.751097    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:13:09.751103    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:13:09.763032    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:13:09.763048    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:13:09.789072    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:13:09.789079    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:13:09.800510    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:13:09.800522    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:13:09.815920    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:13:09.815931    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:13:09.837577    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:13:09.837588    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:13:09.853459    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:13:09.853470    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:13:09.858208    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:13:09.858215    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:13:09.873937    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:13:09.873948    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:13:09.889063    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:13:09.889074    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:13:09.900509    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:13:09.900521    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:13:09.914228    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:13:09.914240    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:13:09.982749    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:13:09.982763    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:13:09.998131    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:13:09.998145    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:13:10.009930    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:13:10.009944    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:13:10.027023    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:13:10.027031    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:13:10.038441    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:13:10.038452    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
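The block above is one full diagnostics pass, and it repeats nearly unchanged after each failed health check for the rest of this log: enumerate containers per control-plane component with a docker name filter, then tail the last 400 log lines of every hit (plus journalctl for kubelet/Docker, dmesg, and kubectl describe nodes). A hedged sketch of the container part of that cycle (names and the 400-line limit mirror the log; error handling is simplified):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"storage-provisioner",
    	}
    	for _, comp := range components {
    		// Containers for a component are named k8s_<component>_..., so a
    		// name filter finds current and exited instances alike.
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+comp, "--format", "{{.ID}}").Output()
    		if err != nil {
    			continue
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", comp)
    			continue
    		}
    		for _, id := range ids {
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("=== %s [%s] ===\n%s", comp, id, logs)
    		}
    	}
    }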
	I0913 17:13:12.575241    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:13:17.577660    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:13:17.578153    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:13:17.608040    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:13:17.608194    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:13:17.626237    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:13:17.626337    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:13:17.639753    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:13:17.639831    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:13:17.651879    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:13:17.651963    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:13:17.662448    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:13:17.662522    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:13:17.672826    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:13:17.672927    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:13:17.682708    5124 logs.go:276] 0 containers: []
	W0913 17:13:17.682721    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:13:17.682786    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:13:17.693181    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:13:17.693197    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:13:17.693202    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:13:17.713568    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:13:17.713578    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:13:17.731812    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:13:17.731827    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:13:17.757851    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:13:17.757859    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:13:17.769408    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:13:17.769421    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:13:17.783107    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:13:17.783122    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:13:17.794320    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:13:17.794330    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:13:17.806089    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:13:17.806101    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:13:17.811998    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:13:17.812006    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:13:17.847475    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:13:17.847487    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:13:17.858498    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:13:17.858508    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:13:17.869545    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:13:17.869555    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:13:17.885744    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:13:17.885755    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:13:17.901120    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:13:17.901137    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:13:17.921932    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:13:17.921940    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:13:17.955843    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:13:17.955853    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:13:17.969396    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:13:17.969407    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:13:20.488378    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:13:25.490889    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:13:25.491347    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:13:25.520345    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:13:25.520492    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:13:25.541344    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:13:25.541467    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:13:25.554442    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:13:25.554514    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:13:25.565796    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:13:25.565871    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:13:25.576265    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:13:25.576354    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:13:25.586742    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:13:25.586823    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:13:25.599793    5124 logs.go:276] 0 containers: []
	W0913 17:13:25.599808    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:13:25.599882    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:13:25.610405    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:13:25.610422    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:13:25.610428    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:13:25.623845    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:13:25.623859    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:13:25.644683    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:13:25.644693    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:13:25.660445    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:13:25.660458    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:13:25.675744    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:13:25.675756    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:13:25.687261    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:13:25.687270    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:13:25.721241    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:13:25.721251    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:13:25.757298    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:13:25.757310    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:13:25.771388    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:13:25.771401    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:13:25.782706    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:13:25.782717    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:13:25.793863    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:13:25.793874    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:13:25.808393    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:13:25.808403    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:13:25.820584    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:13:25.820594    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:13:25.834814    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:13:25.834825    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:13:25.846816    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:13:25.846830    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:13:25.851006    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:13:25.851012    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:13:25.871106    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:13:25.871116    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:13:28.397631    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:13:33.400323    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:13:33.400632    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:13:33.428437    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:13:33.428575    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:13:33.446510    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:13:33.446609    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:13:33.458885    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:13:33.458976    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:13:33.470296    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:13:33.470369    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:13:33.482586    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:13:33.482671    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:13:33.492641    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:13:33.492720    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:13:33.502997    5124 logs.go:276] 0 containers: []
	W0913 17:13:33.503007    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:13:33.503069    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:13:33.513672    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:13:33.513687    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:13:33.513692    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:13:33.517907    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:13:33.517916    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:13:33.533952    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:13:33.533963    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:13:33.548006    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:13:33.548018    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:13:33.562402    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:13:33.562410    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:13:33.576575    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:13:33.576585    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:13:33.590115    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:13:33.590129    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:13:33.606949    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:13:33.606957    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:13:33.617788    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:13:33.617799    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:13:33.629329    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:13:33.629344    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:13:33.643280    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:13:33.643293    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:13:33.654927    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:13:33.654939    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:13:33.690796    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:13:33.690803    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:13:33.724019    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:13:33.724031    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:13:33.744040    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:13:33.744052    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:13:33.763948    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:13:33.763959    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:13:33.775289    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:13:33.775301    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:13:36.301536    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:13:41.304359    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:13:41.304950    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:13:41.342932    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:13:41.343090    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:13:41.363410    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:13:41.363539    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:13:41.378150    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:13:41.378229    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:13:41.390736    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:13:41.390835    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:13:41.401895    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:13:41.401974    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:13:41.412040    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:13:41.412129    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:13:41.422004    5124 logs.go:276] 0 containers: []
	W0913 17:13:41.422014    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:13:41.422076    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:13:41.432582    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:13:41.432601    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:13:41.432606    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:13:41.446328    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:13:41.446338    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:13:41.460021    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:13:41.460031    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:13:41.474846    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:13:41.474856    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:13:41.498796    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:13:41.498806    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:13:41.533096    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:13:41.533108    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:13:41.554952    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:13:41.554963    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:13:41.569915    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:13:41.569926    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:13:41.583946    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:13:41.583958    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:13:41.594896    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:13:41.594910    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:13:41.599058    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:13:41.599064    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:13:41.610073    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:13:41.610084    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:13:41.629041    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:13:41.629051    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:13:41.665365    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:13:41.665377    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:13:41.681467    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:13:41.681478    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:13:41.693015    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:13:41.693028    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:13:41.704495    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:13:41.704504    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:13:44.218652    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:13:49.222339    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:13:49.222900    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:13:49.261400    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:13:49.261567    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:13:49.284633    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:13:49.284767    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:13:49.299393    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:13:49.299469    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:13:49.312034    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:13:49.312122    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:13:49.322222    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:13:49.322308    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:13:49.332562    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:13:49.332646    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:13:49.342614    5124 logs.go:276] 0 containers: []
	W0913 17:13:49.342627    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:13:49.342688    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:13:49.363315    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:13:49.363334    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:13:49.363340    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:13:49.378983    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:13:49.378994    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:13:49.396545    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:13:49.396558    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:13:49.407984    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:13:49.407993    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:13:49.444578    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:13:49.444588    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:13:49.449242    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:13:49.449250    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:13:49.463027    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:13:49.463041    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:13:49.477911    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:13:49.477923    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:13:49.489579    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:13:49.489591    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:13:49.503475    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:13:49.503485    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:13:49.527991    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:13:49.528003    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:13:49.539262    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:13:49.539274    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:13:49.573073    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:13:49.573084    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:13:49.588538    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:13:49.588551    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:13:49.599773    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:13:49.599785    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:13:49.611454    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:13:49.611468    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:13:49.626853    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:13:49.626863    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:13:52.152845    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:13:57.155135    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:13:57.155752    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:13:57.194953    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:13:57.195122    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:13:57.215846    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:13:57.215954    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:13:57.230423    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:13:57.230500    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:13:57.242776    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:13:57.242845    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:13:57.255213    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:13:57.255300    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:13:57.270703    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:13:57.270774    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:13:57.280619    5124 logs.go:276] 0 containers: []
	W0913 17:13:57.280632    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:13:57.280708    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:13:57.291141    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:13:57.291161    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:13:57.291168    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:13:57.330291    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:13:57.330305    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:13:57.341459    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:13:57.341472    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:13:57.352840    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:13:57.352850    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:13:57.364873    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:13:57.364883    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:13:57.384646    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:13:57.384657    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:13:57.399444    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:13:57.399454    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:13:57.416461    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:13:57.416473    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:13:57.430179    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:13:57.430189    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:13:57.454143    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:13:57.454152    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:13:57.488319    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:13:57.488328    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:13:57.492657    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:13:57.492666    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:13:57.506724    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:13:57.506737    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:13:57.526744    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:13:57.526753    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:13:57.543777    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:13:57.543791    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:13:57.568945    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:13:57.568955    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:13:57.582076    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:13:57.582092    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:14:00.094161    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:14:05.094832    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:14:05.095301    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:14:05.127343    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:14:05.127488    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:14:05.146220    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:14:05.146327    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:14:05.160039    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:14:05.160129    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:14:05.171920    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:14:05.171991    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:14:05.185909    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:14:05.185986    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:14:05.196452    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:14:05.196522    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:14:05.206134    5124 logs.go:276] 0 containers: []
	W0913 17:14:05.206152    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:14:05.206224    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:14:05.217283    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:14:05.217307    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:14:05.217313    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:14:05.241546    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:14:05.241557    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:14:05.252987    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:14:05.252998    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:14:05.265712    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:14:05.265724    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:14:05.282913    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:14:05.282927    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:14:05.306275    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:14:05.306281    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:14:05.310790    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:14:05.310797    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:14:05.347271    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:14:05.347284    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:14:05.361571    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:14:05.361583    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:14:05.375440    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:14:05.375452    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:14:05.389412    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:14:05.389420    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:14:05.401104    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:14:05.401114    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:14:05.416225    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:14:05.416237    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:14:05.427871    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:14:05.427880    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:14:05.449304    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:14:05.449313    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:14:05.463734    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:14:05.463747    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:14:05.500076    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:14:05.500088    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:14:08.013246    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:14:13.015391    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:14:13.015652    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:14:13.042969    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:14:13.043117    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:14:13.059899    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:14:13.059992    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:14:13.073099    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:14:13.073184    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:14:13.084619    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:14:13.084703    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:14:13.094899    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:14:13.094974    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:14:13.105347    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:14:13.105424    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:14:13.115413    5124 logs.go:276] 0 containers: []
	W0913 17:14:13.115423    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:14:13.115480    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:14:13.125725    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:14:13.125740    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:14:13.125746    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:14:13.138306    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:14:13.138321    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:14:13.150509    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:14:13.150521    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:14:13.171381    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:14:13.171393    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:14:13.195912    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:14:13.195921    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:14:13.200054    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:14:13.200064    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:14:13.234107    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:14:13.234120    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:14:13.254018    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:14:13.254029    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:14:13.267218    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:14:13.267232    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:14:13.284148    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:14:13.284159    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:14:13.298239    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:14:13.298252    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:14:13.310133    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:14:13.310145    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:14:13.325042    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:14:13.325050    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:14:13.343338    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:14:13.343351    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:14:13.358129    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:14:13.358140    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:14:13.394718    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:14:13.394726    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:14:13.409899    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:14:13.409914    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:14:15.923498    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:14:20.925903    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:14:20.926442    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:14:20.964811    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:14:20.964971    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:14:20.986138    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:14:20.986252    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:14:21.001820    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:14:21.001919    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:14:21.018633    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:14:21.018716    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:14:21.029267    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:14:21.029341    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:14:21.039695    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:14:21.039761    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:14:21.052657    5124 logs.go:276] 0 containers: []
	W0913 17:14:21.052669    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:14:21.052739    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:14:21.063323    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:14:21.063339    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:14:21.063344    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:14:21.083795    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:14:21.083806    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:14:21.100748    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:14:21.100757    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:14:21.117875    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:14:21.117887    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:14:21.133004    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:14:21.133014    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:14:21.170534    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:14:21.170544    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:14:21.193965    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:14:21.193973    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:14:21.228583    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:14:21.228590    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:14:21.242106    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:14:21.242117    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:14:21.253403    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:14:21.253413    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:14:21.265124    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:14:21.265133    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:14:21.277280    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:14:21.277292    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:14:21.281988    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:14:21.281998    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:14:21.299055    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:14:21.299064    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:14:21.313936    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:14:21.313946    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:14:21.324947    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:14:21.324959    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:14:21.336174    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:14:21.336187    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:14:23.852829    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:14:28.855515    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:14:28.855586    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:14:28.866948    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:14:28.867025    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:14:28.878004    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:14:28.878082    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:14:28.890270    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:14:28.890349    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:14:28.902946    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:14:28.903053    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:14:28.915058    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:14:28.915140    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:14:28.927656    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:14:28.927745    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:14:28.940235    5124 logs.go:276] 0 containers: []
	W0913 17:14:28.940249    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:14:28.940316    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:14:28.952839    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:14:28.952860    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:14:28.952867    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:14:28.989691    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:14:28.989715    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:14:29.008789    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:14:29.008805    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:14:29.023057    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:14:29.023072    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:14:29.041203    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:14:29.041216    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:14:29.056242    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:14:29.056253    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:14:29.068557    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:14:29.068572    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:14:29.084936    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:14:29.084946    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:14:29.110792    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:14:29.110801    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:14:29.115289    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:14:29.115296    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:14:29.158021    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:14:29.158032    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:14:29.174329    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:14:29.174341    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:14:29.187442    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:14:29.187453    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:14:29.209216    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:14:29.209228    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:14:29.223699    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:14:29.223710    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:14:29.242907    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:14:29.242924    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:14:29.255598    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:14:29.255609    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:14:31.770601    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:14:36.773116    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:14:36.773313    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:14:36.785341    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:14:36.785433    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:14:36.796475    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:14:36.796555    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:14:36.807363    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:14:36.807450    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:14:36.824517    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:14:36.824602    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:14:36.837416    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:14:36.837491    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:14:36.848295    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:14:36.848364    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:14:36.858739    5124 logs.go:276] 0 containers: []
	W0913 17:14:36.858751    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:14:36.858813    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:14:36.869428    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:14:36.869448    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:14:36.869453    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:14:36.884262    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:14:36.884270    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:14:36.919271    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:14:36.919283    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:14:36.938857    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:14:36.938869    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:14:36.957604    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:14:36.957614    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:14:36.969110    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:14:36.969122    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:14:36.985269    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:14:36.985279    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:14:36.999366    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:14:36.999376    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:14:37.015663    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:14:37.015674    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:14:37.032620    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:14:37.032631    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:14:37.046465    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:14:37.046474    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:14:37.062558    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:14:37.062568    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:14:37.081949    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:14:37.081959    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:14:37.094067    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:14:37.094077    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:14:37.130179    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:14:37.130188    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:14:37.134905    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:14:37.134911    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:14:37.147443    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:14:37.147453    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:14:39.674559    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:14:44.676826    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:14:44.677454    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:14:44.718388    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:14:44.718552    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:14:44.741045    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:14:44.741180    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:14:44.756294    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:14:44.756381    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:14:44.769644    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:14:44.769735    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:14:44.785281    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:14:44.785365    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:14:44.796229    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:14:44.796312    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:14:44.815828    5124 logs.go:276] 0 containers: []
	W0913 17:14:44.815838    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:14:44.815902    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:14:44.829928    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:14:44.829947    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:14:44.829952    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:14:44.834884    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:14:44.834895    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:14:44.856295    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:14:44.856306    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:14:44.874436    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:14:44.874453    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:14:44.897922    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:14:44.897930    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:14:44.909581    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:14:44.909594    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:14:44.945733    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:14:44.945744    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:14:44.960132    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:14:44.960146    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:14:44.978454    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:14:44.978467    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:14:44.990596    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:14:44.990611    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:14:45.026211    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:14:45.026224    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:14:45.040129    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:14:45.040140    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:14:45.054513    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:14:45.054521    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:14:45.066126    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:14:45.066136    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:14:45.077506    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:14:45.077519    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:14:45.093138    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:14:45.093146    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:14:45.109149    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:14:45.109162    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:14:47.627980    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:14:52.629572    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:14:52.629783    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:14:52.645726    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:14:52.645824    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:14:52.664378    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:14:52.664464    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:14:52.675423    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:14:52.675506    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:14:52.689475    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:14:52.689558    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:14:52.700070    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:14:52.700148    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:14:52.710478    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:14:52.710560    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:14:52.720403    5124 logs.go:276] 0 containers: []
	W0913 17:14:52.720417    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:14:52.720493    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:14:52.730591    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:14:52.730609    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:14:52.730615    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:14:52.750849    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:14:52.750859    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:14:52.771373    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:14:52.771385    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:14:52.782698    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:14:52.782707    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:14:52.794348    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:14:52.794358    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:14:52.811663    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:14:52.811675    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:14:52.825211    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:14:52.825225    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:14:52.848880    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:14:52.848889    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:14:52.860330    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:14:52.860339    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:14:52.895218    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:14:52.895226    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:14:52.928709    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:14:52.928721    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:14:52.948032    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:14:52.948042    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:14:52.965039    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:14:52.965054    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:14:52.976316    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:14:52.976327    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:14:52.980858    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:14:52.980864    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:14:52.994699    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:14:52.994710    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:14:53.008459    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:14:53.008468    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:14:55.527663    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:15:00.530126    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:15:00.530468    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:15:00.555859    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:15:00.556006    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:15:00.573213    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:15:00.573318    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:15:00.586272    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:15:00.586360    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:15:00.597576    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:15:00.597657    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:15:00.607950    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:15:00.608033    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:15:00.618567    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:15:00.618653    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:15:00.628500    5124 logs.go:276] 0 containers: []
	W0913 17:15:00.628511    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:15:00.628576    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:15:00.642778    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:15:00.642797    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:15:00.642803    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:15:00.663809    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:15:00.663823    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:15:00.675665    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:15:00.675676    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:15:00.686775    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:15:00.686786    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:15:00.701372    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:15:00.701385    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:15:00.719334    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:15:00.719347    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:15:00.739435    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:15:00.739450    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:15:00.773963    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:15:00.773975    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:15:00.788076    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:15:00.788090    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:15:00.803187    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:15:00.803198    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:15:00.821091    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:15:00.821102    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:15:00.834961    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:15:00.834973    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:15:00.847224    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:15:00.847235    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:15:00.851466    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:15:00.851472    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:15:00.867869    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:15:00.867880    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:15:00.880102    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:15:00.880112    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:15:00.904655    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:15:00.904664    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:15:03.440366    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:15:08.441413    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:15:08.441937    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:15:08.453960    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:15:08.454051    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:15:08.465442    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:15:08.465540    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:15:08.476791    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:15:08.476866    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:15:08.492325    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:15:08.492412    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:15:08.503151    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:15:08.503235    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:15:08.513905    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:15:08.513987    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:15:08.524383    5124 logs.go:276] 0 containers: []
	W0913 17:15:08.524394    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:15:08.524462    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:15:08.539456    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:15:08.539475    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:15:08.539480    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:15:08.560771    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:15:08.560783    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:15:08.572017    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:15:08.572027    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:15:08.607867    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:15:08.607879    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:15:08.612120    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:15:08.612127    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:15:08.625688    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:15:08.625698    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:15:08.642333    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:15:08.642349    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:15:08.658166    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:15:08.658179    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:15:08.669472    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:15:08.669484    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:15:08.692637    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:15:08.692648    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:15:08.727666    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:15:08.727679    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:15:08.745737    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:15:08.745747    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:15:08.757298    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:15:08.757311    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:15:08.772568    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:15:08.772580    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:15:08.787751    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:15:08.787767    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:15:08.802218    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:15:08.802229    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:15:08.823141    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:15:08.823151    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:15:11.340996    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:15:16.343406    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:15:16.344012    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:15:16.389474    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:15:16.389644    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:15:16.413531    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:15:16.413663    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:15:16.429310    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:15:16.429393    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:15:16.441456    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:15:16.441543    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:15:16.453121    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:15:16.453206    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:15:16.463591    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:15:16.463662    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:15:16.474155    5124 logs.go:276] 0 containers: []
	W0913 17:15:16.474167    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:15:16.474240    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:15:16.484828    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:15:16.484846    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:15:16.484853    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:15:16.499249    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:15:16.499262    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:15:16.521055    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:15:16.521069    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:15:16.535563    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:15:16.535577    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:15:16.547226    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:15:16.547241    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:15:16.558882    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:15:16.558893    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:15:16.578306    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:15:16.578318    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:15:16.593034    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:15:16.593047    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:15:16.604116    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:15:16.604127    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:15:16.620363    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:15:16.620376    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:15:16.624710    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:15:16.624719    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:15:16.659933    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:15:16.659944    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:15:16.671676    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:15:16.671691    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:15:16.695321    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:15:16.695329    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:15:16.731026    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:15:16.731033    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:15:16.751677    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:15:16.751686    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:15:16.770873    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:15:16.770885    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:15:19.284438    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:15:24.286588    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:15:24.286847    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:15:24.307175    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:15:24.307310    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:15:24.321287    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:15:24.321381    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:15:24.333257    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:15:24.333344    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:15:24.343971    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:15:24.344057    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:15:24.354392    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:15:24.354469    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:15:24.369522    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:15:24.369598    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:15:24.380056    5124 logs.go:276] 0 containers: []
	W0913 17:15:24.380068    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:15:24.380140    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:15:24.390271    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:15:24.390290    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:15:24.390295    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:15:24.424801    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:15:24.424810    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:15:24.438108    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:15:24.438121    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:15:24.459364    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:15:24.459376    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:15:24.470157    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:15:24.470168    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:15:24.474673    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:15:24.474682    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:15:24.491508    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:15:24.491519    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:15:24.517993    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:15:24.518007    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:15:24.529982    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:15:24.529996    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:15:24.543208    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:15:24.543218    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:15:24.558803    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:15:24.558815    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:15:24.575767    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:15:24.575778    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:15:24.592152    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:15:24.592165    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:15:24.625680    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:15:24.625693    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:15:24.647126    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:15:24.647136    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:15:24.658607    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:15:24.658619    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:15:24.673518    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:15:24.673530    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:15:27.199904    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:15:32.202179    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:15:32.202361    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:15:32.222427    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:15:32.222528    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:15:32.236302    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:15:32.236390    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:15:32.247282    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:15:32.247364    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:15:32.257895    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:15:32.257978    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:15:32.268989    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:15:32.269078    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:15:32.280377    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:15:32.280460    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:15:32.309177    5124 logs.go:276] 0 containers: []
	W0913 17:15:32.309217    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:15:32.309301    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:15:32.326608    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:15:32.326628    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:15:32.326634    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:15:32.349733    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:15:32.349744    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:15:32.363410    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:15:32.363422    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:15:32.378509    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:15:32.378525    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:15:32.390764    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:15:32.390777    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:15:32.403138    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:15:32.403152    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:15:32.442920    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:15:32.442932    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:15:32.461026    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:15:32.461039    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:15:32.478680    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:15:32.478697    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:15:32.490606    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:15:32.490617    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:15:32.527948    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:15:32.527962    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:15:32.542367    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:15:32.542377    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:15:32.557408    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:15:32.557418    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:15:32.581408    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:15:32.581418    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:15:32.586206    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:15:32.586215    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:15:32.606398    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:15:32.606409    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:15:32.621922    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:15:32.621936    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:15:35.135836    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:15:40.138017    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:15:40.138175    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:15:40.149575    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:15:40.149654    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:15:40.160542    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:15:40.160626    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:15:40.171185    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:15:40.171256    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:15:40.181898    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:15:40.181984    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:15:40.192576    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:15:40.192664    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:15:40.203988    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:15:40.204069    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:15:40.214167    5124 logs.go:276] 0 containers: []
	W0913 17:15:40.214178    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:15:40.214250    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:15:40.225000    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:15:40.225019    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:15:40.225025    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:15:40.242363    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:15:40.242376    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:15:40.277678    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:15:40.277693    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:15:40.282707    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:15:40.282715    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:15:40.297524    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:15:40.297540    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:15:40.315542    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:15:40.315559    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:15:40.334470    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:15:40.334487    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:15:40.360074    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:15:40.360089    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:15:40.375030    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:15:40.375042    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:15:40.398677    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:15:40.398695    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:15:40.411402    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:15:40.411413    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:15:40.427748    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:15:40.427760    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:15:40.443151    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:15:40.443170    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:15:40.460079    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:15:40.460090    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:15:40.472603    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:15:40.472615    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:15:40.510803    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:15:40.510817    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:15:40.526400    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:15:40.526411    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
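Each "Checking apiserver healthz" / "stopped: ... Client.Timeout exceeded" pair in this section is a GET against https://10.0.2.15:8443/healthz that never returns headers within the client timeout. A hedged, self-contained sketch of such a poll loop is below; the 5-second timeout is inferred from the ~5s gap between each check and its "stopped" line (e.g. 17:15:43.039 to 17:15:48.041), not read from minikube's source:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // assumed; matches the ~5s check->stopped gaps above
            Transport: &http.Transport{
                // The apiserver serving cert is not trusted by this host, so a
                // local diagnostic probe would skip verification for /healthz only.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for i := 0; i < 3; i++ {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                // e.g. "Client.Timeout exceeded while awaiting headers", as in the log
                fmt.Println("stopped:", err)
                time.Sleep(2 * time.Second)
                continue
            }
            resp.Body.Close()
            fmt.Println("healthz status:", resp.Status)
            return
        }
    }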
	I0913 17:15:43.039281    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:15:48.041404    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:15:48.041506    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:15:48.052903    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:15:48.052988    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:15:48.065052    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:15:48.065136    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:15:48.077891    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:15:48.077976    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:15:48.090520    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:15:48.090598    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:15:48.102338    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:15:48.102418    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:15:48.114066    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:15:48.114154    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:15:48.129265    5124 logs.go:276] 0 containers: []
	W0913 17:15:48.129277    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:15:48.129351    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:15:48.141360    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:15:48.141386    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:15:48.141392    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:15:48.157862    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:15:48.157876    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:15:48.175036    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:15:48.175051    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:15:48.189800    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:15:48.189811    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:15:48.225038    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:15:48.225050    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:15:48.243178    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:15:48.243189    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:15:48.256943    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:15:48.256955    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:15:48.272290    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:15:48.272301    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:15:48.310941    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:15:48.310955    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:15:48.330385    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:15:48.330394    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:15:48.348762    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:15:48.348779    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:15:48.373609    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:15:48.373629    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:15:48.386392    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:15:48.386403    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:15:48.398262    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:15:48.398273    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:15:48.419711    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:15:48.419727    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:15:48.435404    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:15:48.435420    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:15:48.447352    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:15:48.447365    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:15:50.952511    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:15:55.954622    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:15:55.954717    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:15:55.965383    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:15:55.965468    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:15:55.977021    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:15:55.977102    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:15:55.988327    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:15:55.988415    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:15:55.998704    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:15:55.998786    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:15:56.009610    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:15:56.009692    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:15:56.020480    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:15:56.020566    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:15:56.031373    5124 logs.go:276] 0 containers: []
	W0913 17:15:56.031384    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:15:56.031445    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:15:56.041989    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:15:56.042009    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:15:56.042027    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:15:56.055735    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:15:56.055745    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:15:56.070332    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:15:56.070343    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:15:56.084255    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:15:56.084269    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:15:56.119192    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:15:56.119203    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:15:56.157493    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:15:56.157504    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:15:56.170588    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:15:56.170599    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:15:56.188306    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:15:56.188318    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:15:56.210722    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:15:56.210734    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:15:56.222405    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:15:56.222416    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:15:56.227023    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:15:56.227030    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:15:56.246467    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:15:56.246481    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:15:56.262945    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:15:56.262956    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:15:56.275102    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:15:56.275113    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:15:56.296338    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:15:56.296349    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:15:56.307358    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:15:56.307369    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:15:56.323332    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:15:56.323343    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:15:58.836817    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:03.838936    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:03.839052    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:16:03.850462    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:16:03.850558    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:16:03.860772    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:16:03.860865    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:16:03.871106    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:16:03.871189    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:16:03.881892    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:16:03.881979    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:16:03.894649    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:16:03.894734    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:16:03.905181    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:16:03.905282    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:16:03.917072    5124 logs.go:276] 0 containers: []
	W0913 17:16:03.917085    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:16:03.917158    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:16:03.927520    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:16:03.927538    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:16:03.927544    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:16:03.953605    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:16:03.953615    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:16:03.964957    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:16:03.964968    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:16:03.976261    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:16:03.976278    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:16:04.011031    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:16:04.011039    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:16:04.015218    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:16:04.015226    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:16:04.032249    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:16:04.032260    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:16:04.046220    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:16:04.046232    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:16:04.081084    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:16:04.081094    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:16:04.095830    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:16:04.095841    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:16:04.110420    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:16:04.110430    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:16:04.125197    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:16:04.125209    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:16:04.137072    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:16:04.137081    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:16:04.151833    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:16:04.151844    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:16:04.175070    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:16:04.175081    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:16:04.188964    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:16:04.188974    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:16:04.206209    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:16:04.206219    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:16:06.720047    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:11.722404    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:11.722597    5124 kubeadm.go:597] duration metric: took 4m4.417538166s to restartPrimaryControlPlane
	W0913 17:16:11.722743    5124 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 17:16:11.722807    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0913 17:16:12.756270    5124 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.03346425s)
	I0913 17:16:12.756348    5124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 17:16:12.761468    5124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 17:16:12.764262    5124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 17:16:12.766969    5124 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 17:16:12.766976    5124 kubeadm.go:157] found existing configuration files:
	
	I0913 17:16:12.766999    5124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/admin.conf
	I0913 17:16:12.770109    5124 kubeadm.go:163] "https://control-plane.minikube.internal:50289" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 17:16:12.770132    5124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 17:16:12.773417    5124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/kubelet.conf
	I0913 17:16:12.776440    5124 kubeadm.go:163] "https://control-plane.minikube.internal:50289" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 17:16:12.776470    5124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 17:16:12.778919    5124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/controller-manager.conf
	I0913 17:16:12.781931    5124 kubeadm.go:163] "https://control-plane.minikube.internal:50289" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 17:16:12.781962    5124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 17:16:12.784893    5124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/scheduler.conf
	I0913 17:16:12.787213    5124 kubeadm.go:163] "https://control-plane.minikube.internal:50289" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 17:16:12.787239    5124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
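The grep/rm pairs above are a stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before `kubeadm init` runs. A minimal local sketch of that decision (an illustrative stand-in for the over-SSH commands in the log, not minikube's kubeadm.go):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:50289"
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            // grep exits non-zero when the pattern is absent or the file is
            // missing (status 2 above), so either case triggers removal.
            if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
                // Matches the log's `sudo rm -f <conf>`.
                _ = exec.Command("sudo", "rm", "-f", conf).Run()
            }
        }
    }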
	I0913 17:16:12.790047    5124 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 17:16:12.808432    5124 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0913 17:16:12.808477    5124 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 17:16:12.856279    5124 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 17:16:12.856343    5124 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 17:16:12.856398    5124 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 17:16:12.911617    5124 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 17:16:12.916760    5124 out.go:235]   - Generating certificates and keys ...
	I0913 17:16:12.916793    5124 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 17:16:12.916819    5124 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 17:16:12.916850    5124 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 17:16:12.916879    5124 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 17:16:12.916919    5124 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 17:16:12.916947    5124 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 17:16:12.916978    5124 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 17:16:12.917011    5124 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 17:16:12.917048    5124 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 17:16:12.917087    5124 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 17:16:12.917107    5124 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 17:16:12.917131    5124 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 17:16:13.108077    5124 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 17:16:13.192549    5124 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 17:16:13.475502    5124 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 17:16:13.511418    5124 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 17:16:13.546548    5124 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 17:16:13.547575    5124 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 17:16:13.547633    5124 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 17:16:13.630974    5124 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 17:16:13.635180    5124 out.go:235]   - Booting up control plane ...
	I0913 17:16:13.635230    5124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 17:16:13.635369    5124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 17:16:13.635413    5124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 17:16:13.635489    5124 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 17:16:13.636105    5124 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 17:16:18.138796    5124 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502246 seconds
	I0913 17:16:18.138866    5124 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 17:16:18.142560    5124 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 17:16:18.657321    5124 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 17:16:18.657583    5124 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-714000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 17:16:19.163606    5124 kubeadm.go:310] [bootstrap-token] Using token: o2d0nq.2impy11oz3kcah35
	I0913 17:16:19.166121    5124 out.go:235]   - Configuring RBAC rules ...
	I0913 17:16:19.166244    5124 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 17:16:19.166292    5124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 17:16:19.173414    5124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 17:16:19.174664    5124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 17:16:19.175777    5124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 17:16:19.177216    5124 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 17:16:19.180504    5124 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 17:16:19.353068    5124 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 17:16:19.567792    5124 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 17:16:19.568271    5124 kubeadm.go:310] 
	I0913 17:16:19.568303    5124 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 17:16:19.568306    5124 kubeadm.go:310] 
	I0913 17:16:19.568340    5124 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 17:16:19.568343    5124 kubeadm.go:310] 
	I0913 17:16:19.568354    5124 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 17:16:19.568389    5124 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 17:16:19.568438    5124 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 17:16:19.568442    5124 kubeadm.go:310] 
	I0913 17:16:19.568467    5124 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 17:16:19.568472    5124 kubeadm.go:310] 
	I0913 17:16:19.568522    5124 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 17:16:19.568526    5124 kubeadm.go:310] 
	I0913 17:16:19.568569    5124 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 17:16:19.568606    5124 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 17:16:19.568659    5124 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 17:16:19.568667    5124 kubeadm.go:310] 
	I0913 17:16:19.568732    5124 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 17:16:19.568781    5124 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 17:16:19.568788    5124 kubeadm.go:310] 
	I0913 17:16:19.568830    5124 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o2d0nq.2impy11oz3kcah35 \
	I0913 17:16:19.568880    5124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:446f8f90cde123cbedc005b3a5de5af09ada936a0c1ba8e89eedb16e20223601 \
	I0913 17:16:19.568892    5124 kubeadm.go:310] 	--control-plane 
	I0913 17:16:19.568894    5124 kubeadm.go:310] 
	I0913 17:16:19.568931    5124 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 17:16:19.568934    5124 kubeadm.go:310] 
	I0913 17:16:19.569026    5124 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o2d0nq.2impy11oz3kcah35 \
	I0913 17:16:19.569087    5124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:446f8f90cde123cbedc005b3a5de5af09ada936a0c1ba8e89eedb16e20223601 
	I0913 17:16:19.569151    5124 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 17:16:19.569158    5124 cni.go:84] Creating CNI manager for ""
	I0913 17:16:19.569166    5124 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:16:19.573342    5124 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 17:16:19.583384    5124 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 17:16:19.586469    5124 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
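The scp above writes a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist. The payload itself is not captured in this log; the sketch below writes an illustrative bridge conflist of the same general shape, where every field value is an assumption rather than the bytes minikube actually sent:

    package main

    import (
        "log"
        "os"
    )

    // Illustrative bridge CNI conflist; placeholder values, not the real
    // 496-byte file copied in the run above.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        // Mirrors the `sudo mkdir -p /etc/cni/net.d` step in the log.
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }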
	I0913 17:16:19.591155    5124 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 17:16:19.591206    5124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 17:16:19.591229    5124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-714000 minikube.k8s.io/updated_at=2024_09_13T17_16_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=running-upgrade-714000 minikube.k8s.io/primary=true
	I0913 17:16:19.632250    5124 ops.go:34] apiserver oom_adj: -16
	I0913 17:16:19.632264    5124 kubeadm.go:1113] duration metric: took 41.103292ms to wait for elevateKubeSystemPrivileges
	I0913 17:16:19.632269    5124 kubeadm.go:394] duration metric: took 4m12.34115075s to StartCluster
	I0913 17:16:19.632279    5124 settings.go:142] acquiring lock: {Name:mk948e653988f014de7183ca44ad61265c2dc06d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:16:19.632376    5124 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:16:19.632770    5124 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/kubeconfig: {Name:mke2b016812cedc34ffbfc79dbc5c22d8c43c377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:16:19.632946    5124 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:16:19.632957    5124 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 17:16:19.632994    5124 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-714000"
	I0913 17:16:19.633010    5124 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-714000"
	W0913 17:16:19.633014    5124 addons.go:243] addon storage-provisioner should already be in state true
	I0913 17:16:19.633025    5124 host.go:66] Checking if "running-upgrade-714000" exists ...
	I0913 17:16:19.633036    5124 config.go:182] Loaded profile config "running-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:16:19.633042    5124 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-714000"
	I0913 17:16:19.633071    5124 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-714000"
	I0913 17:16:19.633907    5124 kapi.go:59] client config for running-upgrade-714000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/client.key", CAFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105a69800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 17:16:19.634030    5124 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-714000"
	W0913 17:16:19.634035    5124 addons.go:243] addon default-storageclass should already be in state true
	I0913 17:16:19.634043    5124 host.go:66] Checking if "running-upgrade-714000" exists ...
	I0913 17:16:19.637300    5124 out.go:177] * Verifying Kubernetes components...
	I0913 17:16:19.637605    5124 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 17:16:19.641565    5124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 17:16:19.641582    5124 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/running-upgrade-714000/id_rsa Username:docker}
	I0913 17:16:19.645332    5124 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:16:19.649262    5124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:16:19.653297    5124 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 17:16:19.653304    5124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 17:16:19.653310    5124 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/running-upgrade-714000/id_rsa Username:docker}
	I0913 17:16:19.740415    5124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 17:16:19.745175    5124 api_server.go:52] waiting for apiserver process to appear ...
	I0913 17:16:19.745215    5124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 17:16:19.749276    5124 api_server.go:72] duration metric: took 116.321291ms to wait for apiserver process to appear ...
	I0913 17:16:19.749285    5124 api_server.go:88] waiting for apiserver healthz status ...
	I0913 17:16:19.749292    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:19.763534    5124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 17:16:19.789442    5124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 17:16:20.098140    5124 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0913 17:16:20.098153    5124 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0913 17:16:24.751306    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:24.751348    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:29.751673    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:29.751699    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:34.752010    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:34.752049    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:39.752490    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:39.752536    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:44.753114    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:44.753153    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:49.753946    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:49.754004    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0913 17:16:50.099914    5124 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0913 17:16:50.106071    5124 out.go:177] * Enabled addons: storage-provisioner
	I0913 17:16:50.117026    5124 addons.go:510] duration metric: took 30.484527375s for enable addons: enabled=[storage-provisioner]
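The 'default-storageclass' failure above comes from a StorageClass list call that times out against 10.0.2.15:8443. For reference, a hedged client-go sketch of the same list operation (the kubeconfig path is a placeholder taken from this run; this is not minikube's storageclass callback code):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder; the run above used /var/lib/minikube/kubeconfig on the guest.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // The equivalent of this call failed in the log with
        // "dial tcp 10.0.2.15:8443: i/o timeout".
        scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal("Error listing StorageClasses: ", err)
        }
        for _, sc := range scs.Items {
            fmt.Println(sc.Name)
        }
    }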
	I0913 17:16:54.754589    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:54.754642    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:59.755814    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:59.755867    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:04.757423    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:04.757457    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:09.759357    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:09.759381    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:14.761498    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:14.761517    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:19.762689    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:19.762820    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:19.774081    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:17:19.774165    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:19.785003    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:17:19.785084    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:19.795592    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:17:19.795678    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:19.805947    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:17:19.806028    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:19.817077    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:17:19.817157    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:19.827052    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:17:19.827129    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:19.837669    5124 logs.go:276] 0 containers: []
	W0913 17:17:19.837681    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:19.837747    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:19.848374    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:17:19.848390    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:19.848396    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:19.887153    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:19.887167    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:19.925318    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:17:19.925329    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:17:19.939512    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:17:19.939524    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:17:19.953632    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:17:19.953642    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:17:19.965511    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:17:19.965524    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:17:19.985908    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:17:19.985919    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:17:20.002048    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:17:20.002059    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:20.013944    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:20.013959    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:20.018803    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:17:20.018809    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:17:20.030428    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:17:20.030439    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:17:20.045530    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:17:20.045547    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:17:20.057519    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:20.057533    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:22.582679    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:27.583643    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:27.583775    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:27.598557    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:17:27.598651    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:27.609560    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:17:27.609647    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:27.620120    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:17:27.620202    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:27.630640    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:17:27.630717    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:27.641073    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:17:27.641158    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:27.651925    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:17:27.652000    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:27.662313    5124 logs.go:276] 0 containers: []
	W0913 17:17:27.662325    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:27.662399    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:27.673042    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:17:27.673061    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:17:27.673067    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:17:27.688405    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:17:27.688419    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:17:27.703001    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:17:27.703012    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:17:27.714445    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:17:27.714460    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:17:27.731772    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:17:27.731783    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:17:27.743300    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:17:27.743311    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:27.754866    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:27.754876    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:27.791952    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:27.791961    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:27.797029    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:27.797038    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:27.835453    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:17:27.835469    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:17:27.849656    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:17:27.849667    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:17:27.863668    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:17:27.863678    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:17:27.875671    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:27.875682    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:30.402131    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:35.404266    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:35.404527    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:35.429592    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:17:35.429765    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:35.450701    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:17:35.450808    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:35.466238    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:17:35.466330    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:35.476834    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:17:35.476919    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:35.487676    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:17:35.487763    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:35.500545    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:17:35.500622    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:35.510821    5124 logs.go:276] 0 containers: []
	W0913 17:17:35.510833    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:35.510892    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:35.521234    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:17:35.521247    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:17:35.521252    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:17:35.535061    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:17:35.535078    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:17:35.549718    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:17:35.549729    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:17:35.563171    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:17:35.563182    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:17:35.574585    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:17:35.574599    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:17:35.592173    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:17:35.592184    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:35.603386    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:35.603398    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:35.608018    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:17:35.608025    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:17:35.623636    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:17:35.623647    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:17:35.639059    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:17:35.639069    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:17:35.650584    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:35.650595    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:35.674092    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:35.674102    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:35.712512    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:35.712520    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
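
The cycle above repeats for the remainder of this test: minikube polls the apiserver's /healthz endpoint, and when the 5-second client timeout expires (note the consistent 5 s gap between each "Checking apiserver healthz" line and the following "stopped:" line) it falls back to collecting diagnostics from every control-plane container before retrying. Below is a minimal Go sketch of that polling pattern. The URL and the 5 s timeout are taken from the log; the client setup, including skipping TLS verification for the VM's self-signed certificate, is an assumption for illustration and not minikube's actual api_server.go code.

    // healthz_poll.go - illustrative sketch of the apiserver health polling
    // seen above (api_server.go:253/269). Assumptions: a 5s per-attempt
    // client timeout and InsecureSkipVerify for the VM's self-signed cert.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func checkHealthz(url string) error {
        client := &http.Client{
            // Matches the ~5s gap before each "stopped:" line; on expiry
            // net/http reports exactly "context deadline exceeded
            // (Client.Timeout exceeded while awaiting headers)".
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %s", resp.Status)
        }
        return nil
    }

    func main() {
        for {
            if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
                fmt.Println("stopped:", err)
                // gather container logs here (as the cycles above do), then retry
                time.Sleep(2500 * time.Millisecond)
                continue
            }
            fmt.Println("apiserver healthy")
            return
        }
    }
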
	I0913 17:17:38.255610    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:43.257123    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:43.257311    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:43.274355    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:17:43.274458    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:43.287171    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:17:43.287264    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:43.298205    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:17:43.298294    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:43.308505    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:17:43.308588    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:43.318991    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:17:43.319074    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:43.330010    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:17:43.330090    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:43.340338    5124 logs.go:276] 0 containers: []
	W0913 17:17:43.340349    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:43.340418    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:43.351158    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:17:43.351174    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:17:43.351180    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:17:43.363080    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:17:43.363092    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:17:43.375199    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:17:43.375212    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:17:43.387724    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:17:43.387736    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:17:43.411774    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:43.411784    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:43.436413    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:43.436425    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:43.474932    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:43.474943    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:43.510673    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:17:43.510685    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:17:43.525234    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:17:43.525250    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:17:43.539557    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:17:43.539570    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:17:43.555047    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:17:43.555061    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:17:43.566479    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:17:43.566492    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:43.578050    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:43.578060    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
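
Note that the order of the "Gathering logs for ..." steps changes from one cycle to the next (etcd first in the first cycle, kube-scheduler first in the next, and so on) even though the set of containers is identical. A plausible explanation, offered here as an assumption rather than a reading of logs.go, is that the log sources are held in a Go map, and Go deliberately randomizes map iteration order, as this short demo shows.

    // map_order.go - demonstrates Go's randomized map iteration order,
    // which would account for the shuffled "Gathering logs for" sequence.
    package main

    import "fmt"

    func main() {
        sources := map[string]string{
            "kube-apiserver": "136509bb2488",
            "etcd":           "5b963d6f284a",
            "kube-scheduler": "5dcbd870db4b",
            "kube-proxy":     "d6595dc4ece7",
        }
        // Each run (and each range statement) may visit the keys in a
        // different order; Go makes no ordering guarantee for maps.
        for name, id := range sources {
            fmt.Printf("Gathering logs for %s [%s]\n", name, id)
        }
    }
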
	I0913 17:17:46.082755    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:51.084868    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:51.085048    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:51.096181    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:17:51.096265    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:51.107491    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:17:51.107574    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:51.118539    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:17:51.118621    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:51.130888    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:17:51.130960    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:51.141932    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:17:51.142001    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:51.152328    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:17:51.152398    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:51.162595    5124 logs.go:276] 0 containers: []
	W0913 17:17:51.162610    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:51.162681    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:51.173595    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:17:51.173613    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:17:51.173618    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:17:51.188749    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:17:51.188764    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:17:51.200596    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:17:51.200608    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:17:51.218289    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:17:51.218303    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:17:51.229964    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:51.229976    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:51.266968    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:51.266978    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:51.271942    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:51.271950    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:51.307729    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:17:51.307742    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:17:51.319577    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:51.319589    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:51.343131    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:17:51.343146    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:51.355617    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:17:51.355630    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:17:51.374703    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:17:51.374714    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:17:51.388739    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:17:51.388749    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:17:53.901720    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:58.903916    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:58.904234    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:58.936520    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:17:58.936679    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:58.956003    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:17:58.956117    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:58.969695    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:17:58.969777    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:58.987792    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:17:58.987878    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:58.998719    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:17:58.998808    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:59.009750    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:17:59.009839    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:59.020329    5124 logs.go:276] 0 containers: []
	W0913 17:17:59.020339    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:59.020410    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:59.031512    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:17:59.031528    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:17:59.031533    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:17:59.043470    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:59.043481    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:59.081327    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:17:59.081340    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:17:59.100968    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:17:59.100980    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:17:59.112443    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:17:59.112456    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:17:59.125724    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:17:59.125733    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:17:59.140559    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:59.140571    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:59.163348    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:17:59.163355    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:59.174742    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:59.174753    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:59.179294    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:59.179301    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:59.215147    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:17:59.215160    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:17:59.229298    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:17:59.229312    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:17:59.240628    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:17:59.240641    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:18:01.760309    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:06.762434    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:06.762636    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:06.774782    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:18:06.774877    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:06.788543    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:18:06.788640    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:06.799535    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:18:06.799623    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:06.814101    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:18:06.814187    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:06.824607    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:18:06.824687    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:06.837601    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:18:06.837674    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:06.848109    5124 logs.go:276] 0 containers: []
	W0913 17:18:06.848127    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:06.848201    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:06.858742    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:18:06.858755    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:18:06.858760    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:18:06.872923    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:18:06.872934    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:18:06.887421    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:18:06.887432    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:18:06.902295    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:18:06.902305    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:18:06.918233    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:18:06.918244    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:06.929410    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:06.929419    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:06.963724    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:06.963735    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:06.968230    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:18:06.968240    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:18:06.979634    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:18:06.979644    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:18:06.991195    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:18:06.991206    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:18:07.008493    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:18:07.008506    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:18:07.024102    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:07.024111    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:07.049583    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:07.049600    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:09.591335    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:14.593557    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:14.593861    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:14.620996    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:18:14.621133    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:14.637468    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:18:14.637566    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:14.650841    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:18:14.650926    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:14.663063    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:18:14.663148    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:14.674035    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:18:14.674122    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:14.684997    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:18:14.685082    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:14.695488    5124 logs.go:276] 0 containers: []
	W0913 17:18:14.695499    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:14.695572    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:14.705862    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:18:14.705880    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:18:14.705886    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:18:14.721628    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:18:14.721667    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:18:14.733292    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:18:14.733303    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:18:14.748215    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:14.748227    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:14.786510    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:14.786521    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:14.790919    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:18:14.790926    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:18:14.808822    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:18:14.808832    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:18:14.820402    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:18:14.820412    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:18:14.832390    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:18:14.832401    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:18:14.850606    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:18:14.850622    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:18:14.862796    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:14.862808    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:14.887672    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:14.887686    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:14.924158    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:18:14.924168    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:17.437332    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:22.439668    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:22.439862    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:22.455358    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:18:22.455447    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:22.467584    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:18:22.467673    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:22.478277    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:18:22.478352    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:22.492260    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:18:22.492342    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:22.502977    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:18:22.503068    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:22.513746    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:18:22.513826    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:22.523626    5124 logs.go:276] 0 containers: []
	W0913 17:18:22.523642    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:22.523710    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:22.534175    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:18:22.534192    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:18:22.534198    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:18:22.552160    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:18:22.552176    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:18:22.563523    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:18:22.563539    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:18:22.575234    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:18:22.575246    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:18:22.592435    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:22.592448    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:22.629869    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:22.629882    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:22.634376    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:22.634384    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:22.669902    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:18:22.669912    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:18:22.685514    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:18:22.685524    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:18:22.701868    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:18:22.701884    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:18:22.713590    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:18:22.713602    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:22.725040    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:18:22.725051    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:18:22.740791    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:22.740802    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:25.267630    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:30.269789    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:30.269962    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:30.286047    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:18:30.286149    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:30.297827    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:18:30.297917    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:30.312328    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:18:30.312415    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:30.322359    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:18:30.322440    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:30.333090    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:18:30.333176    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:30.344397    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:18:30.344475    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:30.354178    5124 logs.go:276] 0 containers: []
	W0913 17:18:30.354189    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:30.354264    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:30.364477    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:18:30.364492    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:18:30.364499    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:18:30.379303    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:18:30.379314    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:18:30.390906    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:18:30.390920    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:18:30.405593    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:18:30.405604    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:18:30.417489    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:18:30.417499    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:18:30.440992    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:18:30.441003    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:18:30.452333    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:30.452343    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:30.475959    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:30.475972    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:30.480373    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:18:30.480382    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:30.492031    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:30.492044    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:30.532687    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:18:30.532703    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:18:30.547540    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:18:30.547555    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:18:30.560224    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:30.560234    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:33.100277    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:38.102405    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:38.102518    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:38.115977    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:18:38.116070    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:38.133181    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:18:38.133263    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:38.143626    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:18:38.143718    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:38.154336    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:18:38.154424    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:38.165071    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:18:38.165147    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:38.181514    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:18:38.181596    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:38.192113    5124 logs.go:276] 0 containers: []
	W0913 17:18:38.192130    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:38.192196    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:38.203559    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:18:38.203584    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:18:38.203590    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:18:38.217753    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:18:38.217766    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:18:38.230409    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:18:38.230421    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:18:38.246449    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:18:38.246460    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:18:38.258395    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:18:38.258406    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:38.270742    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:38.270756    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:38.275780    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:18:38.275789    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:18:38.287046    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:18:38.287059    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:18:38.299316    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:38.299327    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:38.324450    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:18:38.324460    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:18:38.337822    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:18:38.337835    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:18:38.358057    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:38.358068    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:38.393769    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:18:38.393779    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:18:38.409014    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:18:38.409029    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:18:38.427415    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:38.427429    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
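
From the 17:18:38 cycle onward the coredns query returns four containers ([bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]) instead of the two seen in earlier cycles, which suggests the coredns pods were restarted while the apiserver remained unreachable. The discovery step itself is just the docker ps filter logged in each cycle; the Go sketch below mirrors that command (the helper name listContainers is invented for illustration and is not minikube's logs.go API).

    // list_containers.go - sketch of the container-discovery step logged as
    // "docker ps -a --filter=name=k8s_coredns --format={{.ID}}".
    // listContainers is a hypothetical helper, not minikube's actual API.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        // One container ID per output line, as in the "N containers: [...]" lines.
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := listContainers("coredns")
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }
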
	I0913 17:18:40.968605    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:45.970910    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:45.971154    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:45.988472    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:18:45.988574    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:46.001094    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:18:46.001187    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:46.012316    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:18:46.012405    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:46.022964    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:18:46.023044    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:46.033687    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:18:46.033770    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:46.043885    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:18:46.043966    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:46.054746    5124 logs.go:276] 0 containers: []
	W0913 17:18:46.054759    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:46.054833    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:46.067694    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:18:46.067713    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:18:46.067719    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:18:46.081482    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:46.081494    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:46.104633    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:46.104642    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:46.140734    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:46.140744    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:46.176567    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:18:46.176578    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:18:46.188458    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:18:46.188469    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:18:46.200278    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:18:46.200293    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:18:46.219121    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:18:46.219132    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:18:46.233756    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:18:46.233768    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:18:46.247985    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:18:46.247998    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:18:46.259830    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:18:46.259844    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:18:46.284223    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:18:46.284237    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:18:46.303540    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:18:46.303551    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:46.315386    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:46.315397    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:46.320178    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:18:46.320189    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:18:48.834228    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:53.836800    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:53.836967    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:53.853227    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:18:53.853330    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:53.867200    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:18:53.867290    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:53.878071    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:18:53.878156    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:53.888777    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:18:53.888861    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:53.900158    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:18:53.900233    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:53.910887    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:18:53.910966    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:53.921871    5124 logs.go:276] 0 containers: []
	W0913 17:18:53.921884    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:53.921956    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:53.932596    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:18:53.932615    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:53.932621    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:53.967157    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:18:53.967168    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:18:53.984973    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:18:53.984984    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:18:53.997192    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:18:53.997203    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:18:54.009299    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:18:54.009311    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:18:54.024405    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:18:54.024418    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:18:54.036708    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:18:54.036721    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:18:54.057234    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:18:54.057245    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:18:54.069023    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:54.069033    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:54.107150    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:54.107161    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:54.111908    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:18:54.111914    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:18:54.125525    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:18:54.125538    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:18:54.142995    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:18:54.143011    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:18:54.154849    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:54.154862    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:54.178265    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:18:54.178275    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:56.691499    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:01.693802    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:01.694129    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:01.723241    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:19:01.723404    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:01.741385    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:19:01.741467    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:01.758827    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:19:01.758923    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:01.769726    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:19:01.769799    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:01.780355    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:19:01.780444    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:01.791074    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:19:01.791158    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:01.801466    5124 logs.go:276] 0 containers: []
	W0913 17:19:01.801478    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:01.801548    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:01.813142    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:19:01.813159    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:19:01.813165    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:19:01.828069    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:19:01.828080    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:19:01.843351    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:01.843362    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:01.847996    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:19:01.848002    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:19:01.862489    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:19:01.862502    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:19:01.875653    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:19:01.875664    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:01.887237    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:01.887249    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:01.923109    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:19:01.923124    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:19:01.935291    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:01.935303    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:01.959197    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:19:01.959210    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:19:01.971161    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:19:01.971177    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:19:01.988956    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:01.988968    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:02.027824    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:19:02.027839    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:19:02.042631    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:19:02.042642    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:19:02.054494    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:19:02.054507    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:19:04.570442    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:09.572682    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:09.572820    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:09.584637    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:19:09.584724    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:09.595615    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:19:09.595702    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:09.606827    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:19:09.606908    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:09.630591    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:19:09.630690    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:09.641459    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:19:09.641546    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:09.652388    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:19:09.652474    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:09.662600    5124 logs.go:276] 0 containers: []
	W0913 17:19:09.662612    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:09.662683    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:09.673265    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:19:09.673284    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:19:09.673290    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:19:09.685534    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:19:09.685547    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:19:09.699633    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:19:09.699644    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:19:09.715608    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:19:09.715618    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:19:09.726957    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:19:09.726968    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:19:09.739045    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:09.739059    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:09.778057    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:09.778070    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:09.783120    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:19:09.783127    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:19:09.795627    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:09.795642    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:09.820470    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:19:09.820481    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:09.832745    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:19:09.832762    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:19:09.845122    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:19:09.845136    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:19:09.860830    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:19:09.860859    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:19:09.873134    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:19:09.873150    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:19:09.890951    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:09.890961    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:12.427210    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:17.429845    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:17.430194    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:17.460422    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:19:17.460541    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:17.477241    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:19:17.477359    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:17.491568    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:19:17.491659    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:17.503963    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:19:17.504041    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:17.514265    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:19:17.514355    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:17.525081    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:19:17.525163    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:17.535692    5124 logs.go:276] 0 containers: []
	W0913 17:19:17.535705    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:17.535772    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:17.546290    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:19:17.546310    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:19:17.546318    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:19:17.565168    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:17.565185    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:17.570213    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:19:17.570222    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:19:17.582390    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:19:17.582401    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:19:17.594567    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:19:17.594583    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:19:17.610086    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:19:17.610102    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:19:17.622019    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:17.622030    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:17.659486    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:19:17.659496    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:19:17.671286    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:19:17.671297    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:19:17.691759    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:19:17.691769    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:19:17.709314    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:17.709325    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:17.734005    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:19:17.734012    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:17.746003    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:17.746012    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:17.780178    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:19:17.780190    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:19:17.792819    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:19:17.792831    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:19:20.313734    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:25.316082    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:25.316302    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:25.333332    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:19:25.333433    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:25.347258    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:19:25.347350    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:25.358923    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:19:25.359009    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:25.369638    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:19:25.369708    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:25.380774    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:19:25.380859    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:25.390955    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:19:25.391034    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:25.401395    5124 logs.go:276] 0 containers: []
	W0913 17:19:25.401405    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:25.401467    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:25.411882    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:19:25.411901    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:19:25.411908    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:19:25.424023    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:25.424034    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:25.448514    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:19:25.448538    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:19:25.462981    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:19:25.462992    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:19:25.477368    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:19:25.477381    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:19:25.489122    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:19:25.489136    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:19:25.501048    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:25.501060    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:25.505462    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:19:25.505471    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:19:25.520779    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:19:25.520791    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:19:25.542197    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:19:25.542208    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:19:25.560127    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:19:25.560139    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:19:25.577386    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:25.577398    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:25.612977    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:19:25.612988    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:19:25.625282    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:19:25.625294    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:25.636990    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:25.637001    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:28.177023    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:33.178489    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:33.178632    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:33.191823    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:19:33.191920    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:33.210803    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:19:33.210879    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:33.222269    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:19:33.222350    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:33.232847    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:19:33.232917    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:33.243224    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:19:33.243315    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:33.261508    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:19:33.261590    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:33.271533    5124 logs.go:276] 0 containers: []
	W0913 17:19:33.271545    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:33.271616    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:33.282216    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:19:33.282234    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:19:33.282240    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:19:33.300167    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:19:33.300180    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:19:33.312098    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:19:33.312110    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:19:33.338372    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:19:33.338383    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:19:33.350673    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:19:33.350685    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:19:33.364777    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:19:33.364790    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:19:33.381204    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:19:33.381214    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:33.394605    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:33.394621    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:33.400058    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:19:33.400068    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:19:33.413255    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:19:33.413271    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:19:33.424906    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:33.424917    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:33.450036    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:33.450075    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:33.489056    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:19:33.489068    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:19:33.503273    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:19:33.503284    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:19:33.515163    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:33.515176    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:36.054067    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:41.056407    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:41.056705    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:41.082729    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:19:41.082851    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:41.098983    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:19:41.099083    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:41.114515    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:19:41.114600    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:41.126066    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:19:41.126137    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:41.138075    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:19:41.138146    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:41.149573    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:19:41.149661    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:41.159471    5124 logs.go:276] 0 containers: []
	W0913 17:19:41.159486    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:41.159559    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:41.171865    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:19:41.171886    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:41.171892    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:41.176524    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:41.176531    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:41.211864    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:19:41.211876    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:19:41.224039    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:19:41.224052    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:19:41.235309    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:19:41.235325    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:19:41.246783    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:19:41.246794    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:19:41.265194    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:19:41.265207    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:19:41.279259    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:19:41.279273    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:19:41.293120    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:19:41.293131    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:19:41.305223    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:19:41.305234    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:19:41.319914    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:19:41.319929    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:41.331315    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:41.331326    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:41.370678    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:19:41.370689    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:19:41.382558    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:19:41.382572    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:19:41.400477    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:41.400488    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:43.927143    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:48.929053    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:48.929263    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:48.949241    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:19:48.949354    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:48.964417    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:19:48.964523    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:48.976868    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:19:48.976957    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:48.987456    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:19:48.987546    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:48.998773    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:19:48.998853    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:49.017249    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:19:49.017329    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:49.027600    5124 logs.go:276] 0 containers: []
	W0913 17:19:49.027613    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:49.027684    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:49.039155    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:19:49.039183    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:19:49.039188    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:19:49.050522    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:19:49.050533    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:19:49.066881    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:49.066893    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:49.090566    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:49.090577    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:49.126227    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:19:49.126239    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:19:49.138377    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:19:49.138392    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:19:49.149941    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:19:49.149955    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:19:49.162209    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:19:49.162224    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:19:49.178165    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:19:49.178177    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:19:49.195900    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:19:49.195913    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:49.207290    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:49.207302    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:49.212104    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:19:49.212112    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:19:49.226170    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:19:49.226181    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:19:49.238383    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:49.238396    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:49.277447    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:19:49.277464    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:19:51.798798    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:56.800514    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:56.800650    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:56.812185    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:19:56.812269    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:56.824041    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:19:56.824124    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:56.837272    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:19:56.837362    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:56.848094    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:19:56.848184    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:56.859144    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:19:56.859216    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:56.870232    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:19:56.870320    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:56.885340    5124 logs.go:276] 0 containers: []
	W0913 17:19:56.885353    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:56.885430    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:56.896991    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:19:56.897009    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:56.897015    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:56.936222    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:56.936242    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:56.940987    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:56.940996    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:56.977546    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:19:56.977558    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:19:56.997123    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:19:56.997140    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:19:57.009304    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:19:57.009317    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:19:57.022665    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:19:57.022677    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:19:57.039337    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:57.039348    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:57.065334    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:19:57.065347    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:19:57.081318    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:19:57.081335    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:19:57.094574    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:19:57.094589    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:19:57.107468    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:19:57.107481    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:19:57.124618    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:19:57.124633    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:19:57.144524    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:19:57.144542    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:19:57.156836    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:19:57.156848    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:59.671098    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:04.671644    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:04.672161    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:20:04.708370    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:20:04.708524    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:20:04.726018    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:20:04.726126    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:20:04.740412    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:20:04.740506    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:20:04.752300    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:20:04.752371    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:20:04.763302    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:20:04.763383    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:20:04.774853    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:20:04.774938    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:20:04.788665    5124 logs.go:276] 0 containers: []
	W0913 17:20:04.788677    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:20:04.788752    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:20:04.799409    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:20:04.799431    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:20:04.799437    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:20:04.811923    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:20:04.811934    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:20:04.829639    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:20:04.829653    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:20:04.841433    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:20:04.841445    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:20:04.853923    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:20:04.853936    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:20:04.868769    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:20:04.868785    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:20:04.873415    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:20:04.873423    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:20:04.907795    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:20:04.907810    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:20:04.920454    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:20:04.920470    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:20:04.945562    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:20:04.945570    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:20:04.983134    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:20:04.983143    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:20:04.994864    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:20:04.994875    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:20:05.010687    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:20:05.010698    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:20:05.022884    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:20:05.022895    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:20:05.037651    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:20:05.037660    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:20:07.550406    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:12.552749    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:12.552914    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:20:12.563897    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:20:12.563978    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:20:12.575074    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:20:12.575158    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:20:12.586641    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:20:12.586717    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:20:12.597379    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:20:12.597465    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:20:12.608157    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:20:12.608244    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:20:12.618891    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:20:12.618980    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:20:12.630060    5124 logs.go:276] 0 containers: []
	W0913 17:20:12.630074    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:20:12.630147    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:20:12.640495    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:20:12.640512    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:20:12.640518    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:20:12.653090    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:20:12.653106    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:20:12.668943    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:20:12.668955    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:20:12.680877    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:20:12.680888    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:20:12.692841    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:20:12.692852    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:20:12.704692    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:20:12.704702    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:20:12.728647    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:20:12.728659    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:20:12.765969    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:20:12.765978    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:20:12.801271    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:20:12.801286    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:20:12.815937    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:20:12.815949    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:20:12.838930    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:20:12.838942    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:20:12.850519    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:20:12.850530    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:20:12.855031    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:20:12.855039    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:20:12.869854    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:20:12.869866    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:20:12.887945    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:20:12.887956    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:20:15.402152    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:20.404395    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:20.408561    5124 out.go:201] 
	W0913 17:20:20.411327    5124 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0913 17:20:20.411337    5124 out.go:270] * 
	W0913 17:20:20.411980    5124 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:20:20.427292    5124 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-714000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-09-13 17:20:20.516251 -0700 PDT m=+3288.027884543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-714000 -n running-upgrade-714000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-714000 -n running-upgrade-714000: exit status 2 (15.708839333s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-714000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-300000          | force-systemd-flag-300000 | jenkins | v1.34.0 | 13 Sep 24 17:10 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-453000              | force-systemd-env-453000  | jenkins | v1.34.0 | 13 Sep 24 17:10 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-453000           | force-systemd-env-453000  | jenkins | v1.34.0 | 13 Sep 24 17:10 PDT | 13 Sep 24 17:10 PDT |
	| start   | -p docker-flags-124000                | docker-flags-124000       | jenkins | v1.34.0 | 13 Sep 24 17:10 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-300000             | force-systemd-flag-300000 | jenkins | v1.34.0 | 13 Sep 24 17:10 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-300000          | force-systemd-flag-300000 | jenkins | v1.34.0 | 13 Sep 24 17:10 PDT | 13 Sep 24 17:10 PDT |
	| start   | -p cert-expiration-955000             | cert-expiration-955000    | jenkins | v1.34.0 | 13 Sep 24 17:10 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-124000 ssh               | docker-flags-124000       | jenkins | v1.34.0 | 13 Sep 24 17:10 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-124000 ssh               | docker-flags-124000       | jenkins | v1.34.0 | 13 Sep 24 17:10 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-124000                | docker-flags-124000       | jenkins | v1.34.0 | 13 Sep 24 17:10 PDT | 13 Sep 24 17:10 PDT |
	| start   | -p cert-options-905000                | cert-options-905000       | jenkins | v1.34.0 | 13 Sep 24 17:10 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-905000 ssh               | cert-options-905000       | jenkins | v1.34.0 | 13 Sep 24 17:11 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-905000 -- sudo        | cert-options-905000       | jenkins | v1.34.0 | 13 Sep 24 17:11 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-905000                | cert-options-905000       | jenkins | v1.34.0 | 13 Sep 24 17:11 PDT | 13 Sep 24 17:11 PDT |
	| start   | -p running-upgrade-714000             | minikube                  | jenkins | v1.26.0 | 13 Sep 24 17:11 PDT | 13 Sep 24 17:11 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-714000             | running-upgrade-714000    | jenkins | v1.34.0 | 13 Sep 24 17:11 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-955000             | cert-expiration-955000    | jenkins | v1.34.0 | 13 Sep 24 17:14 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-955000             | cert-expiration-955000    | jenkins | v1.34.0 | 13 Sep 24 17:14 PDT | 13 Sep 24 17:14 PDT |
	| start   | -p kubernetes-upgrade-171000          | kubernetes-upgrade-171000 | jenkins | v1.34.0 | 13 Sep 24 17:14 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-171000          | kubernetes-upgrade-171000 | jenkins | v1.34.0 | 13 Sep 24 17:14 PDT | 13 Sep 24 17:14 PDT |
	| start   | -p kubernetes-upgrade-171000          | kubernetes-upgrade-171000 | jenkins | v1.34.0 | 13 Sep 24 17:14 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-171000          | kubernetes-upgrade-171000 | jenkins | v1.34.0 | 13 Sep 24 17:14 PDT | 13 Sep 24 17:14 PDT |
	| start   | -p stopped-upgrade-434000             | minikube                  | jenkins | v1.26.0 | 13 Sep 24 17:14 PDT | 13 Sep 24 17:15 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-434000 stop           | minikube                  | jenkins | v1.26.0 | 13 Sep 24 17:15 PDT | 13 Sep 24 17:15 PDT |
	| start   | -p stopped-upgrade-434000             | stopped-upgrade-434000    | jenkins | v1.34.0 | 13 Sep 24 17:15 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 17:15:19
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
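	Severity is the leading [IWEF] character in that format, so warnings and errors can be pulled out of a saved copy of this log with a one-liner (a sketch; last-start.log is a hypothetical file name):
	
	  grep -E '^[WE][0-9]{4} ' last-start.log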
	I0913 17:15:19.985900    5271 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:15:19.986089    5271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:15:19.986094    5271 out.go:358] Setting ErrFile to fd 2...
	I0913 17:15:19.986097    5271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:15:19.986274    5271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:15:19.987676    5271 out.go:352] Setting JSON to false
	I0913 17:15:20.007126    5271 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4483,"bootTime":1726268436,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:15:20.007208    5271 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:15:20.010630    5271 out.go:177] * [stopped-upgrade-434000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:15:20.017576    5271 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:15:20.017614    5271 notify.go:220] Checking for updates...
	I0913 17:15:20.024526    5271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:15:20.027513    5271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:15:20.031541    5271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:15:20.034530    5271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:15:20.037556    5271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:15:20.040800    5271 config.go:182] Loaded profile config "stopped-upgrade-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:15:20.043513    5271 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0913 17:15:20.046585    5271 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:15:20.050515    5271 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 17:15:20.057505    5271 start.go:297] selected driver: qemu2
	I0913 17:15:20.057510    5271 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-434000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 17:15:20.057557    5271 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:15:20.060154    5271 cni.go:84] Creating CNI manager for ""
	I0913 17:15:20.060187    5271 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:15:20.060212    5271 start.go:340] cluster config:
	{Name:stopped-upgrade-434000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 17:15:20.060261    5271 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:15:20.068497    5271 out.go:177] * Starting "stopped-upgrade-434000" primary control-plane node in "stopped-upgrade-434000" cluster
	I0913 17:15:20.072573    5271 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0913 17:15:20.072588    5271 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0913 17:15:20.072598    5271 cache.go:56] Caching tarball of preloaded images
	I0913 17:15:20.072659    5271 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:15:20.072665    5271 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0913 17:15:20.072716    5271 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/config.json ...
	I0913 17:15:20.073192    5271 start.go:360] acquireMachinesLock for stopped-upgrade-434000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:15:20.073225    5271 start.go:364] duration metric: took 26.917µs to acquireMachinesLock for "stopped-upgrade-434000"
	I0913 17:15:20.073232    5271 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:15:20.073238    5271 fix.go:54] fixHost starting: 
	I0913 17:15:20.073335    5271 fix.go:112] recreateIfNeeded on stopped-upgrade-434000: state=Stopped err=<nil>
	W0913 17:15:20.073342    5271 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:15:20.081548    5271 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-434000" ...
	I0913 17:15:19.284438    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:15:20.085412    5271 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:15:20.085479    5271 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50468-:22,hostfwd=tcp::50469-:2376,hostname=stopped-upgrade-434000 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/disk.qcow2
	I0913 17:15:20.131839    5271 main.go:141] libmachine: STDOUT: 
	I0913 17:15:20.131865    5271 main.go:141] libmachine: STDERR: 
	I0913 17:15:20.131871    5271 main.go:141] libmachine: Waiting for VM to start (ssh -p 50468 docker@127.0.0.1)...
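	The libmachine invocation above is a single long line; these are the same flags restated one group per line for readability ($M abbreviates the machine directory, an editorial shorthand only):
	
	  M=/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000
	  qemu-system-aarch64 \
	    -M virt,highmem=off -cpu host -accel hvf \
	    -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	    -display none -m 2200 -smp 2 -boot d \
	    -cdrom "$M/boot2docker.iso" \
	    -qmp "unix:$M/monitor,server,nowait" \
	    -pidfile "$M/qemu.pid" \
	    -nic user,model=virtio,hostfwd=tcp::50468-:22,hostfwd=tcp::50469-:2376,hostname=stopped-upgrade-434000 \
	    -daemonize "$M/disk.qcow2"
	
	The user-mode NIC forwards host port 50468 to the guest's SSH port 22 and host port 50469 to the Docker TLS port 2376, which is why provisioning below dials localhost:50468.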
	I0913 17:15:24.286588    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:15:24.286847    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:15:24.307175    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:15:24.307310    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:15:24.321287    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:15:24.321381    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:15:24.333257    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:15:24.333344    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:15:24.343971    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:15:24.344057    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:15:24.354392    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:15:24.354469    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:15:24.369522    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:15:24.369598    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:15:24.380056    5124 logs.go:276] 0 containers: []
	W0913 17:15:24.380068    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:15:24.380140    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:15:24.390271    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:15:24.390290    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:15:24.390295    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:15:24.424801    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:15:24.424810    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:15:24.438108    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:15:24.438121    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:15:24.459364    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:15:24.459376    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:15:24.470157    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:15:24.470168    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:15:24.474673    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:15:24.474682    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:15:24.491508    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:15:24.491519    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:15:24.517993    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:15:24.518007    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:15:24.529982    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:15:24.529996    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:15:24.543208    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:15:24.543218    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:15:24.558803    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:15:24.558815    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:15:24.575767    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:15:24.575778    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:15:24.592152    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:15:24.592165    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:15:24.625680    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:15:24.625693    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:15:24.647126    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:15:24.647136    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:15:24.658607    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:15:24.658619    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:15:24.673518    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:15:24.673530    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:15:27.199904    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:15:32.202179    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
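	Each healthz probe above is given roughly five seconds before api_server.go reports it stopped; the same endpoint can be checked by hand from inside the guest (a sketch; -k is needed because the apiserver certificate is signed by minikube's own CA):
	
	  curl -k --max-time 5 https://10.0.2.15:8443/healthz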
	I0913 17:15:32.202361    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:15:32.222427    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:15:32.222528    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:15:32.236302    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:15:32.236390    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:15:32.247282    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:15:32.247364    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:15:32.257895    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:15:32.257978    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:15:32.268989    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:15:32.269078    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:15:32.280377    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:15:32.280460    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:15:32.309177    5124 logs.go:276] 0 containers: []
	W0913 17:15:32.309217    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:15:32.309301    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:15:32.326608    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:15:32.326628    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:15:32.326634    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:15:32.349733    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:15:32.349744    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:15:32.363410    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:15:32.363422    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:15:32.378509    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:15:32.378525    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:15:32.390764    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:15:32.390777    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:15:32.403138    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:15:32.403152    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:15:32.442920    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:15:32.442932    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:15:32.461026    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:15:32.461039    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:15:32.478680    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:15:32.478697    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:15:32.490606    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:15:32.490617    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:15:32.527948    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:15:32.527962    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:15:32.542367    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:15:32.542377    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:15:32.557408    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:15:32.557418    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:15:32.581408    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:15:32.581418    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:15:32.586206    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:15:32.586215    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:15:32.606398    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:15:32.606409    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:15:32.621922    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:15:32.621936    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:15:35.135836    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:15:39.616795    5271 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/config.json ...
	I0913 17:15:39.617180    5271 machine.go:93] provisionDockerMachine start ...
	I0913 17:15:39.617266    5271 main.go:141] libmachine: Using SSH client type: native
	I0913 17:15:39.617527    5271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010ad190] 0x1010af9d0 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0913 17:15:39.617538    5271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 17:15:39.690882    5271 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 17:15:39.690900    5271 buildroot.go:166] provisioning hostname "stopped-upgrade-434000"
	I0913 17:15:39.690980    5271 main.go:141] libmachine: Using SSH client type: native
	I0913 17:15:39.691141    5271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010ad190] 0x1010af9d0 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0913 17:15:39.691153    5271 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-434000 && echo "stopped-upgrade-434000" | sudo tee /etc/hostname
	I0913 17:15:39.763986    5271 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-434000
	
	I0913 17:15:39.764055    5271 main.go:141] libmachine: Using SSH client type: native
	I0913 17:15:39.764232    5271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010ad190] 0x1010af9d0 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0913 17:15:39.764247    5271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-434000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-434000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-434000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 17:15:39.832330    5271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
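	The script above leaves /etc/hosts with a Debian-style local mapping for the new hostname, either by rewriting an existing 127.0.1.1 entry or appending one, i.e. a line like the following (illustrative, not logged output):
	
	  127.0.1.1 stopped-upgrade-434000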
	I0913 17:15:39.832346    5271 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19640-1360/.minikube CaCertPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19640-1360/.minikube}
	I0913 17:15:39.832359    5271 buildroot.go:174] setting up certificates
	I0913 17:15:39.832369    5271 provision.go:84] configureAuth start
	I0913 17:15:39.832374    5271 provision.go:143] copyHostCerts
	I0913 17:15:39.832453    5271 exec_runner.go:144] found /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.pem, removing ...
	I0913 17:15:39.832462    5271 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.pem
	I0913 17:15:39.832559    5271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.pem (1078 bytes)
	I0913 17:15:39.832732    5271 exec_runner.go:144] found /Users/jenkins/minikube-integration/19640-1360/.minikube/cert.pem, removing ...
	I0913 17:15:39.832737    5271 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19640-1360/.minikube/cert.pem
	I0913 17:15:39.832782    5271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19640-1360/.minikube/cert.pem (1123 bytes)
	I0913 17:15:39.832882    5271 exec_runner.go:144] found /Users/jenkins/minikube-integration/19640-1360/.minikube/key.pem, removing ...
	I0913 17:15:39.832885    5271 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19640-1360/.minikube/key.pem
	I0913 17:15:39.832927    5271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19640-1360/.minikube/key.pem (1679 bytes)
	I0913 17:15:39.833072    5271 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-434000 san=[127.0.0.1 localhost minikube stopped-upgrade-434000]
	I0913 17:15:39.895458    5271 provision.go:177] copyRemoteCerts
	I0913 17:15:39.895494    5271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 17:15:39.895503    5271 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/id_rsa Username:docker}
	I0913 17:15:39.931964    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 17:15:39.938977    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 17:15:39.945469    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0913 17:15:39.954609    5271 provision.go:87] duration metric: took 122.231917ms to configureAuth
	I0913 17:15:39.954619    5271 buildroot.go:189] setting minikube options for container-runtime
	I0913 17:15:39.954737    5271 config.go:182] Loaded profile config "stopped-upgrade-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:15:39.954787    5271 main.go:141] libmachine: Using SSH client type: native
	I0913 17:15:39.954880    5271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010ad190] 0x1010af9d0 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0913 17:15:39.954885    5271 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0913 17:15:40.018308    5271 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0913 17:15:40.018320    5271 buildroot.go:70] root file system type: tmpfs
	I0913 17:15:40.018371    5271 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0913 17:15:40.018422    5271 main.go:141] libmachine: Using SSH client type: native
	I0913 17:15:40.018528    5271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010ad190] 0x1010af9d0 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0913 17:15:40.018561    5271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0913 17:15:40.085798    5271 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
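	The empty ExecStart= followed by the populated one is the standard systemd reset idiom the unit's own comments describe: without the reset, systemd sees two ExecStart= settings and refuses to start the service. The same pattern in a minimal drop-in override (a hypothetical illustration, not part of this run):
	
	  sudo mkdir -p /etc/systemd/system/docker.service.d
	  printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd\n' |
	    sudo tee /etc/systemd/system/docker.service.d/override.conf
	  sudo systemctl daemon-reload && sudo systemctl restart docker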
	
	I0913 17:15:40.085856    5271 main.go:141] libmachine: Using SSH client type: native
	I0913 17:15:40.085966    5271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010ad190] 0x1010af9d0 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0913 17:15:40.085975    5271 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0913 17:15:40.457857    5271 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
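	The diff || { ... } command above is an idempotence guard: diff exits zero when the staged unit matches the installed one, skipping the restart, and non-zero when it differs or, as here, does not exist yet ("can't stat"), so the install-and-restart branch runs and systemd creates the enablement symlink shown in the output. The general shape of the pattern, with cur and new as hypothetical placeholders:
	
	  sudo diff -u "$cur" "$new" || {
	    sudo mv "$new" "$cur"
	    sudo systemctl daemon-reload && sudo systemctl restart docker
	  }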
	
	I0913 17:15:40.457873    5271 machine.go:96] duration metric: took 840.695959ms to provisionDockerMachine
	I0913 17:15:40.457895    5271 start.go:293] postStartSetup for "stopped-upgrade-434000" (driver="qemu2")
	I0913 17:15:40.457905    5271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 17:15:40.457964    5271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 17:15:40.457972    5271 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/id_rsa Username:docker}
	I0913 17:15:40.492724    5271 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 17:15:40.494190    5271 info.go:137] Remote host: Buildroot 2021.02.12
	I0913 17:15:40.494199    5271 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19640-1360/.minikube/addons for local assets ...
	I0913 17:15:40.494286    5271 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19640-1360/.minikube/files for local assets ...
	I0913 17:15:40.494415    5271 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19640-1360/.minikube/files/etc/ssl/certs/18822.pem -> 18822.pem in /etc/ssl/certs
	I0913 17:15:40.494559    5271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 17:15:40.497281    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/files/etc/ssl/certs/18822.pem --> /etc/ssl/certs/18822.pem (1708 bytes)
	I0913 17:15:40.504640    5271 start.go:296] duration metric: took 46.734334ms for postStartSetup
	I0913 17:15:40.504661    5271 fix.go:56] duration metric: took 20.431731167s for fixHost
	I0913 17:15:40.504723    5271 main.go:141] libmachine: Using SSH client type: native
	I0913 17:15:40.504847    5271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010ad190] 0x1010af9d0 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0913 17:15:40.504853    5271 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 17:15:40.569450    5271 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726272940.665142712
	
	I0913 17:15:40.569461    5271 fix.go:216] guest clock: 1726272940.665142712
	I0913 17:15:40.569470    5271 fix.go:229] Guest: 2024-09-13 17:15:40.665142712 -0700 PDT Remote: 2024-09-13 17:15:40.504663 -0700 PDT m=+20.548086001 (delta=160.479712ms)
	I0913 17:15:40.569482    5271 fix.go:200] guest clock delta is within tolerance: 160.479712ms
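	The delta is simply guest wall-clock minus host wall-clock at the moment of the probe, and the logged figures check out (an awk sketch of the arithmetic; the host timestamp 17:15:40.504663 -0700 corresponds to epoch 1726272940.504663):
	
	  awk 'BEGIN { printf "%.3f ms\n", (1726272940.665142712 - 1726272940.504663) * 1000 }'
	  # prints 160.480 ms, i.e. the logged delta of 160.479712ms up to rounding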
	I0913 17:15:40.569485    5271 start.go:83] releasing machines lock for "stopped-upgrade-434000", held for 20.496563042s
	I0913 17:15:40.569568    5271 ssh_runner.go:195] Run: cat /version.json
	I0913 17:15:40.569577    5271 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/id_rsa Username:docker}
	I0913 17:15:40.569568    5271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 17:15:40.569607    5271 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/id_rsa Username:docker}
	W0913 17:15:40.570292    5271 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50595->127.0.0.1:50468: write: broken pipe
	I0913 17:15:40.570308    5271 retry.go:31] will retry after 146.226762ms: ssh: handshake failed: write tcp 127.0.0.1:50595->127.0.0.1:50468: write: broken pipe
	W0913 17:15:40.601685    5271 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0913 17:15:40.601732    5271 ssh_runner.go:195] Run: systemctl --version
	I0913 17:15:40.603516    5271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 17:15:40.605109    5271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 17:15:40.605142    5271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0913 17:15:40.608331    5271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0913 17:15:40.612890    5271 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 17:15:40.612899    5271 start.go:495] detecting cgroup driver to use...
	I0913 17:15:40.612978    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 17:15:40.618886    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0913 17:15:40.622166    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0913 17:15:40.625112    5271 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0913 17:15:40.625141    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0913 17:15:40.628490    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 17:15:40.631646    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0913 17:15:40.634452    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 17:15:40.637191    5271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 17:15:40.640515    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0913 17:15:40.643755    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0913 17:15:40.646629    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0913 17:15:40.649508    5271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 17:15:40.652798    5271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 17:15:40.655883    5271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:15:40.725868    5271 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0913 17:15:40.734632    5271 start.go:495] detecting cgroup driver to use...
	I0913 17:15:40.734712    5271 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0913 17:15:40.740689    5271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 17:15:40.746036    5271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 17:15:40.757120    5271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 17:15:40.797277    5271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 17:15:40.802100    5271 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0913 17:15:40.863152    5271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 17:15:40.868567    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 17:15:40.874430    5271 ssh_runner.go:195] Run: which cri-dockerd
	I0913 17:15:40.875584    5271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0913 17:15:40.878017    5271 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0913 17:15:40.882645    5271 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0913 17:15:40.965661    5271 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0913 17:15:41.045695    5271 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0913 17:15:41.045755    5271 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
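	The daemon.json payload itself is not echoed into the log; given the "cgroupfs" message above, a minimal file of this shape would have the same effect (an assumed illustration, not the logged 130-byte content):
	
	  echo '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }' | sudo tee /etc/docker/daemon.json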
	I0913 17:15:41.050983    5271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:15:41.131973    5271 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 17:15:42.290854    5271 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.158880708s)
	I0913 17:15:42.290922    5271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0913 17:15:42.295470    5271 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0913 17:15:42.301380    5271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 17:15:42.306128    5271 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0913 17:15:42.384907    5271 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0913 17:15:42.469943    5271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:15:42.549101    5271 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0913 17:15:42.555551    5271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 17:15:42.559816    5271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:15:42.626376    5271 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0913 17:15:42.664839    5271 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0913 17:15:42.664923    5271 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0913 17:15:42.667499    5271 start.go:563] Will wait 60s for crictl version
	I0913 17:15:42.667553    5271 ssh_runner.go:195] Run: which crictl
	I0913 17:15:42.668929    5271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 17:15:42.683430    5271 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0913 17:15:42.683506    5271 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 17:15:42.702185    5271 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 17:15:40.138017    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:15:40.138175    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:15:40.149575    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:15:40.149654    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:15:40.160542    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:15:40.160626    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:15:40.171185    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:15:40.171256    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:15:40.181898    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:15:40.181984    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:15:40.192576    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:15:40.192664    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:15:40.203988    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:15:40.204069    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:15:40.214167    5124 logs.go:276] 0 containers: []
	W0913 17:15:40.214178    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:15:40.214250    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:15:40.225000    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:15:40.225019    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:15:40.225025    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:15:40.242363    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:15:40.242376    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:15:40.277678    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:15:40.277693    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:15:40.282707    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:15:40.282715    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:15:40.297524    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:15:40.297540    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:15:40.315542    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:15:40.315559    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:15:40.334470    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:15:40.334487    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:15:40.360074    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:15:40.360089    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:15:40.375030    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:15:40.375042    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:15:40.398677    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:15:40.398695    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:15:40.411402    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:15:40.411413    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:15:40.427748    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:15:40.427760    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:15:40.443151    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:15:40.443170    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:15:40.460079    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:15:40.460090    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:15:40.472603    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:15:40.472615    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:15:40.510803    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:15:40.510817    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:15:40.526400    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:15:40.526411    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:15:43.039281    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:15:42.723581    5271 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0913 17:15:42.723736    5271 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0913 17:15:42.725122    5271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 17:15:42.729124    5271 kubeadm.go:883] updating cluster {Name:stopped-upgrade-434000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0913 17:15:42.729179    5271 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0913 17:15:42.729229    5271 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 17:15:42.742879    5271 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 17:15:42.742889    5271 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
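	The mismatch is only in the registry prefix: the images cached in the VM still carry the legacy k8s.gcr.io names, so the check for the registry.k8s.io-named image fails and minikube falls back to re-copying the preload tarball below.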
	I0913 17:15:42.742937    5271 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 17:15:42.745877    5271 ssh_runner.go:195] Run: which lz4
	I0913 17:15:42.747187    5271 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 17:15:42.748375    5271 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 17:15:42.748385    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0913 17:15:43.642872    5271 docker.go:649] duration metric: took 895.743959ms to copy over tarball
	I0913 17:15:43.642935    5271 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 17:15:44.807380    5271 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.164436208s)
	I0913 17:15:44.807404    5271 ssh_runner.go:146] rm: /preloaded.tar.lz4
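	The preload is an lz4-compressed tarball, so outside of minikube it can be integrity-checked or listed with the stock lz4 CLI (a sketch, assuming lz4 and tar are installed; $P is shorthand for the tarball path):
	
	  P=preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	  lz4 -t "$P"                    # decompress to /dev/null as an integrity check
	  lz4 -dc "$P" | tar -t | head   # list archive members without extracting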
	I0913 17:15:44.823160    5271 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 17:15:44.826039    5271 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0913 17:15:44.830824    5271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:15:44.909372    5271 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 17:15:48.041404    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:15:48.041506    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:15:48.052903    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:15:48.052988    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:15:48.065052    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:15:48.065136    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:15:48.077891    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:15:48.077976    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:15:48.090520    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:15:48.090598    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:15:48.102338    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:15:48.102418    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:15:48.114066    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:15:48.114154    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:15:48.129265    5124 logs.go:276] 0 containers: []
	W0913 17:15:48.129277    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:15:48.129351    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:15:48.141360    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:15:48.141386    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:15:48.141392    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:15:48.157862    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:15:48.157876    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:15:48.175036    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:15:48.175051    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:15:48.189800    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:15:48.189811    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:15:48.225038    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:15:48.225050    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:15:48.243178    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:15:48.243189    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:15:48.256943    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:15:48.256955    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:15:48.272290    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:15:48.272301    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:15:48.310941    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:15:48.310955    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:15:48.330385    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:15:48.330394    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:15:48.348762    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:15:48.348779    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:15:48.373609    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:15:48.373629    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:15:48.386392    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:15:48.386403    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:15:48.398262    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:15:48.398273    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:15:48.419711    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:15:48.419727    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:15:48.435404    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:15:48.435420    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:15:48.447352    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:15:48.447365    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
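The repeated "Gathering logs for ..." lines all follow one pattern: resolve container IDs per component with a docker ps name filter, then tail each hit with docker logs --tail 400. A sketch of that loop, again assuming local docker rather than SSH (component names are taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func gatherLogs() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, c := range components {
		// Matches both running and exited containers, like `docker ps -a` above.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			continue
		}
		for _, id := range strings.Fields(string(out)) {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}

func main() { gatherLogs() }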
	I0913 17:15:47.459094    5271 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.549738958s)
	I0913 17:15:47.459202    5271 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 17:15:47.472393    5271 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 17:15:47.472413    5271 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0913 17:15:47.472418    5271 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 17:15:47.481594    5271 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:15:47.483840    5271 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 17:15:47.484686    5271 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:15:47.484738    5271 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 17:15:47.487208    5271 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 17:15:47.487561    5271 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 17:15:47.488776    5271 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0913 17:15:47.488843    5271 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 17:15:47.490080    5271 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 17:15:47.490334    5271 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 17:15:47.491275    5271 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0913 17:15:47.491636    5271 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0913 17:15:47.492404    5271 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 17:15:47.492458    5271 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 17:15:47.493206    5271 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0913 17:15:47.493837    5271 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 17:15:47.924731    5271 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 17:15:47.932159    5271 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0913 17:15:47.937188    5271 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0913 17:15:47.937216    5271 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 17:15:47.937283    5271 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 17:15:47.949377    5271 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0913 17:15:47.951227    5271 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0913 17:15:47.951247    5271 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 17:15:47.951290    5271 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0913 17:15:47.952889    5271 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0913 17:15:47.952996    5271 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0913 17:15:47.966741    5271 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0913 17:15:47.966765    5271 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 17:15:47.966831    5271 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0913 17:15:47.969294    5271 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0913 17:15:47.972792    5271 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0913 17:15:47.975110    5271 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0913 17:15:47.975121    5271 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0913 17:15:47.975177    5271 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0913 17:15:47.975207    5271 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0913 17:15:47.986621    5271 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0913 17:15:47.986629    5271 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0913 17:15:47.986645    5271 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 17:15:47.986703    5271 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0913 17:15:48.000785    5271 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0913 17:15:48.000803    5271 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0913 17:15:48.000870    5271 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0913 17:15:48.001892    5271 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0913 17:15:48.001999    5271 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0913 17:15:48.007921    5271 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0913 17:15:48.007978    5271 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0913 17:15:48.016370    5271 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0913 17:15:48.016498    5271 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0913 17:15:48.017973    5271 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0913 17:15:48.017994    5271 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 17:15:48.018047    5271 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0913 17:15:48.019895    5271 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0913 17:15:48.019913    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0913 17:15:48.027139    5271 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0913 17:15:48.027152    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0913 17:15:48.031606    5271 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0913 17:15:48.031736    5271 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0913 17:15:48.056336    5271 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0913 17:15:48.056376    5271 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0913 17:15:48.056401    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0913 17:15:48.102152    5271 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0913 17:15:48.102181    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0913 17:15:48.142131    5271 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0913 17:15:48.267548    5271 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0913 17:15:48.267670    5271 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:15:48.278781    5271 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0913 17:15:48.278806    5271 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:15:48.278873    5271 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:15:48.294029    5271 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0913 17:15:48.294166    5271 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0913 17:15:48.295493    5271 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0913 17:15:48.295507    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0913 17:15:48.328759    5271 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0913 17:15:48.328773    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0913 17:15:48.580421    5271 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0913 17:15:48.580466    5271 cache_images.go:92] duration metric: took 1.108058708s to LoadCachedImages
	W0913 17:15:48.580507    5271 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
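Each "needs transfer" / "Loading image" pair above is the same check-and-reload cycle: compare the image ID in the runtime against the cached hash, and on a mismatch remove the stale image and stream the cached tarball through docker load. A hedged sketch of that cycle (hash and paths copied from the log; the sha256: prefix handling is an assumption of this sketch):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func loadCachedImage(image, wantID, cachedTar string) error {
	out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	got := strings.TrimSpace(string(out))
	if strings.HasSuffix(got, wantID) { // inspect prints "sha256:<hash>"
		return nil // already present at the expected hash
	}
	// Stale or missing: drop it, then stream the cached tarball into docker.
	_ = exec.Command("docker", "rmi", image).Run()
	load := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo cat %s | docker load", cachedTar))
	return load.Run()
}

func main() {
	err := loadCachedImage("registry.k8s.io/pause:3.7",
		"e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550",
		"/var/lib/minikube/images/pause_3.7")
	fmt.Println(err)
}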
	I0913 17:15:48.580512    5271 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0913 17:15:48.580574    5271 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-434000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 17:15:48.580648    5271 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0913 17:15:48.593873    5271 cni.go:84] Creating CNI manager for ""
	I0913 17:15:48.593886    5271 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:15:48.593894    5271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 17:15:48.593910    5271 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-434000 NodeName:stopped-upgrade-434000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 17:15:48.593976    5271 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-434000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 17:15:48.594043    5271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0913 17:15:48.596760    5271 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 17:15:48.596792    5271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 17:15:48.599599    5271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0913 17:15:48.604569    5271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 17:15:48.609483    5271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0913 17:15:48.614828    5271 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0913 17:15:48.615960    5271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
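The /etc/hosts one-liner above is idempotent: it strips any prior control-plane.minikube.internal entry before appending the current IP, so repeated starts never accumulate duplicates. The same logic in Go, writing a local file instead of sudo-copying over /etc/hosts (a simplification of this sketch):

package main

import (
	"fmt"
	"os"
	"strings"
)

func updateHosts(path, ip string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Drop any existing entry for the control-plane alias.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	// Re-append the alias pointing at the current node IP.
	kept = append(kept, ip+"\tcontrol-plane.minikube.internal")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() { fmt.Println(updateHosts("hosts", "10.0.2.15")) }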
	I0913 17:15:48.619817    5271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:15:48.696213    5271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 17:15:48.702680    5271 certs.go:68] Setting up /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000 for IP: 10.0.2.15
	I0913 17:15:48.702689    5271 certs.go:194] generating shared ca certs ...
	I0913 17:15:48.702698    5271 certs.go:226] acquiring lock for ca certs: {Name:mka1fd556c9b3f29c4a4f622bab1c9ab3ca42c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:15:48.702872    5271 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.key
	I0913 17:15:48.702927    5271 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/proxy-client-ca.key
	I0913 17:15:48.702934    5271 certs.go:256] generating profile certs ...
	I0913 17:15:48.703007    5271 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/client.key
	I0913 17:15:48.703025    5271 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.key.80b5d6c6
	I0913 17:15:48.703037    5271 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.crt.80b5d6c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0913 17:15:48.840023    5271 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.crt.80b5d6c6 ...
	I0913 17:15:48.840036    5271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.crt.80b5d6c6: {Name:mkb5c88ac1f7f13f2e6e0a96f7a3818c09276c86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:15:48.840350    5271 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.key.80b5d6c6 ...
	I0913 17:15:48.840356    5271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.key.80b5d6c6: {Name:mk4fc2536626eac333b238412708d9e9a1843fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:15:48.840485    5271 certs.go:381] copying /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.crt.80b5d6c6 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.crt
	I0913 17:15:48.840694    5271 certs.go:385] copying /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.key.80b5d6c6 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.key
	I0913 17:15:48.840863    5271 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/proxy-client.key
	I0913 17:15:48.840997    5271 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/1882.pem (1338 bytes)
	W0913 17:15:48.841025    5271 certs.go:480] ignoring /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/1882_empty.pem, impossibly tiny 0 bytes
	I0913 17:15:48.841043    5271 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 17:15:48.841077    5271 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem (1078 bytes)
	I0913 17:15:48.841098    5271 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem (1123 bytes)
	I0913 17:15:48.841119    5271 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/key.pem (1679 bytes)
	I0913 17:15:48.841165    5271 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/files/etc/ssl/certs/18822.pem (1708 bytes)
	I0913 17:15:48.841483    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 17:15:48.848693    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 17:15:48.856180    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 17:15:48.863631    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 17:15:48.871352    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 17:15:48.878629    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 17:15:48.885151    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 17:15:48.892508    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 17:15:48.899865    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 17:15:48.906840    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/1882.pem --> /usr/share/ca-certificates/1882.pem (1338 bytes)
	I0913 17:15:48.913696    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/files/etc/ssl/certs/18822.pem --> /usr/share/ca-certificates/18822.pem (1708 bytes)
	I0913 17:15:48.920658    5271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 17:15:48.925932    5271 ssh_runner.go:195] Run: openssl version
	I0913 17:15:48.927786    5271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 17:15:48.930676    5271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 17:15:48.932130    5271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0913 17:15:48.932153    5271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 17:15:48.934046    5271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 17:15:48.937199    5271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1882.pem && ln -fs /usr/share/ca-certificates/1882.pem /etc/ssl/certs/1882.pem"
	I0913 17:15:48.940501    5271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1882.pem
	I0913 17:15:48.941872    5271 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:41 /usr/share/ca-certificates/1882.pem
	I0913 17:15:48.941892    5271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1882.pem
	I0913 17:15:48.943628    5271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1882.pem /etc/ssl/certs/51391683.0"
	I0913 17:15:48.946572    5271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18822.pem && ln -fs /usr/share/ca-certificates/18822.pem /etc/ssl/certs/18822.pem"
	I0913 17:15:48.949378    5271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18822.pem
	I0913 17:15:48.950768    5271 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:41 /usr/share/ca-certificates/18822.pem
	I0913 17:15:48.950790    5271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18822.pem
	I0913 17:15:48.952651    5271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18822.pem /etc/ssl/certs/3ec20f2e.0"
	I0913 17:15:48.955956    5271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 17:15:48.957565    5271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 17:15:48.959696    5271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 17:15:48.961627    5271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 17:15:48.963595    5271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 17:15:48.965567    5271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 17:15:48.967424    5271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
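The openssl x509 -checkend 86400 runs above each ask one question per certificate: will it still be valid 24 hours from now? An equivalent check in Go with crypto/x509 (the path in main is just one of the certs checked above; local file access is an assumption of this sketch):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithinDay mirrors `openssl x509 -checkend 86400`: it reports whether
// the certificate's NotAfter falls within the next 24 hours.
func expiresWithinDay(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithinDay("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	fmt.Println(soon, err)
}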
	I0913 17:15:48.969275    5271 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-434000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 17:15:48.969358    5271 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 17:15:48.979400    5271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 17:15:48.982338    5271 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 17:15:48.982343    5271 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 17:15:48.982368    5271 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 17:15:48.985794    5271 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 17:15:48.986081    5271 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-434000" does not appear in /Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:15:48.986179    5271 kubeconfig.go:62] /Users/jenkins/minikube-integration/19640-1360/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-434000" cluster setting kubeconfig missing "stopped-upgrade-434000" context setting]
	I0913 17:15:48.986399    5271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/kubeconfig: {Name:mke2b016812cedc34ffbfc79dbc5c22d8c43c377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:15:48.986850    5271 kapi.go:59] client config for stopped-upgrade-434000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/client.key", CAFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102685800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 17:15:48.987175    5271 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 17:15:48.989908    5271 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-434000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0913 17:15:48.989916    5271 kubeadm.go:1160] stopping kube-system containers ...
	I0913 17:15:48.989964    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 17:15:49.000885    5271 docker.go:483] Stopping containers: [82408eec4148 5a0624279b19 bae4a9a1e6b5 b1a82bf46d1b c4642c4570af 4396aa229875 3e54a98c5ad8 6920b725f6d5]
	I0913 17:15:49.000960    5271 ssh_runner.go:195] Run: docker stop 82408eec4148 5a0624279b19 bae4a9a1e6b5 b1a82bf46d1b c4642c4570af 4396aa229875 3e54a98c5ad8 6920b725f6d5
	I0913 17:15:49.011910    5271 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 17:15:49.017833    5271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 17:15:49.020997    5271 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 17:15:49.021003    5271 kubeadm.go:157] found existing configuration files:
	
	I0913 17:15:49.021029    5271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf
	I0913 17:15:49.024152    5271 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 17:15:49.024183    5271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 17:15:49.027224    5271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf
	I0913 17:15:49.029551    5271 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 17:15:49.029571    5271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 17:15:49.032546    5271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf
	I0913 17:15:49.035606    5271 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 17:15:49.035629    5271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 17:15:49.038166    5271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf
	I0913 17:15:49.040782    5271 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 17:15:49.040802    5271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 17:15:49.043776    5271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 17:15:49.046559    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 17:15:49.068163    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 17:15:49.478357    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 17:15:49.599806    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 17:15:49.625758    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
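The five kubeadm invocations above are the restart path's init phases, run individually and in a fixed order rather than as a full kubeadm init. A sketch of that sequence (PATH prefix and config path copied from the log; running it locally is an assumption of this sketch):

package main

import (
	"fmt"
	"os/exec"
)

func restartPhases() error {
	// Same order as the log: certs, kubeconfigs, kubelet, static pods, etcd.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start",
		"control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
			return fmt.Errorf("phase %q: %w", p, err)
		}
	}
	return nil
}

func main() { fmt.Println(restartPhases()) }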
	I0913 17:15:49.648497    5271 api_server.go:52] waiting for apiserver process to appear ...
	I0913 17:15:49.648579    5271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 17:15:50.952511    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:15:50.150663    5271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 17:15:50.650628    5271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 17:15:50.654753    5271 api_server.go:72] duration metric: took 1.006272167s to wait for apiserver process to appear ...
	I0913 17:15:50.654764    5271 api_server.go:88] waiting for apiserver healthz status ...
	I0913 17:15:50.654772    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
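The healthz checks that follow are a poll loop: a short per-request timeout, retried until an overall deadline, with a "stopped:" line logged on each miss. A minimal version of that loop (InsecureSkipVerify is a simplification of this sketch; minikube verifies against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			return nil // apiserver answered healthz
		}
		if err == nil {
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond) // retry until the deadline
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() { fmt.Println(waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute)) }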
	I0913 17:15:55.954622    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:15:55.954717    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:15:55.965383    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:15:55.965468    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:15:55.977021    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:15:55.977102    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:15:55.988327    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:15:55.988415    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:15:55.998704    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:15:55.998786    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:15:56.009610    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:15:56.009692    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:15:56.020480    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:15:56.020566    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:15:56.031373    5124 logs.go:276] 0 containers: []
	W0913 17:15:56.031384    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:15:56.031445    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:15:56.041989    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:15:56.042009    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:15:56.042027    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:15:56.055735    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:15:56.055745    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:15:56.070332    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:15:56.070343    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:15:56.084255    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:15:56.084269    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:15:56.119192    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:15:56.119203    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:15:56.157493    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:15:56.157504    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:15:56.170588    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:15:56.170599    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:15:56.188306    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:15:56.188318    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:15:56.210722    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:15:56.210734    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:15:56.222405    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:15:56.222416    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:15:56.227023    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:15:56.227030    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:15:56.246467    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:15:56.246481    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:15:56.262945    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:15:56.262956    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:15:56.275102    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:15:56.275113    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:15:56.296338    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:15:56.296349    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:15:56.307358    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:15:56.307369    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:15:56.323332    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:15:56.323343    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:15:58.836817    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:15:55.655415    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:15:55.655472    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:03.838936    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:03.839052    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:16:03.850462    5124 logs.go:276] 2 containers: [e5f93769610c 428c1a8c245e]
	I0913 17:16:03.850558    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:16:03.860772    5124 logs.go:276] 2 containers: [eeedd70c24f1 ebfad5ea78f0]
	I0913 17:16:03.860865    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:16:03.871106    5124 logs.go:276] 1 containers: [0a7efddc3787]
	I0913 17:16:03.871189    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:16:03.881892    5124 logs.go:276] 2 containers: [9898ac48152d 1d68fc4833c0]
	I0913 17:16:03.881979    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:16:03.894649    5124 logs.go:276] 1 containers: [d8fe5f014a5c]
	I0913 17:16:03.894734    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:16:03.905181    5124 logs.go:276] 2 containers: [c11c17dd636f 8f162a672acf]
	I0913 17:16:03.905282    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:16:03.917072    5124 logs.go:276] 0 containers: []
	W0913 17:16:03.917085    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:16:03.917158    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:16:03.927520    5124 logs.go:276] 2 containers: [d3b8610023ca 0810e822b4ee]
	I0913 17:16:03.927538    5124 logs.go:123] Gathering logs for kube-apiserver [428c1a8c245e] ...
	I0913 17:16:03.927544    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 428c1a8c245e"
	I0913 17:16:03.953605    5124 logs.go:123] Gathering logs for coredns [0a7efddc3787] ...
	I0913 17:16:03.953615    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a7efddc3787"
	I0913 17:16:03.964957    5124 logs.go:123] Gathering logs for storage-provisioner [d3b8610023ca] ...
	I0913 17:16:03.964968    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3b8610023ca"
	I0913 17:16:03.976261    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:16:03.976278    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:16:04.011031    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:16:04.011039    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:16:04.015218    5124 logs.go:123] Gathering logs for kube-controller-manager [c11c17dd636f] ...
	I0913 17:16:04.015226    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c11c17dd636f"
	I0913 17:16:04.032249    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:16:04.032260    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:16:04.046220    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:16:04.046232    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:16:04.081084    5124 logs.go:123] Gathering logs for kube-apiserver [e5f93769610c] ...
	I0913 17:16:04.081094    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5f93769610c"
	I0913 17:16:04.095830    5124 logs.go:123] Gathering logs for etcd [ebfad5ea78f0] ...
	I0913 17:16:04.095841    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebfad5ea78f0"
	I0913 17:16:04.110420    5124 logs.go:123] Gathering logs for kube-scheduler [1d68fc4833c0] ...
	I0913 17:16:04.110430    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d68fc4833c0"
	I0913 17:16:00.655926    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:00.655970    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:04.125197    5124 logs.go:123] Gathering logs for kube-proxy [d8fe5f014a5c] ...
	I0913 17:16:04.125209    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8fe5f014a5c"
	I0913 17:16:04.137072    5124 logs.go:123] Gathering logs for kube-controller-manager [8f162a672acf] ...
	I0913 17:16:04.137081    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f162a672acf"
	I0913 17:16:04.151833    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:16:04.151844    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:16:04.175070    5124 logs.go:123] Gathering logs for etcd [eeedd70c24f1] ...
	I0913 17:16:04.175081    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eeedd70c24f1"
	I0913 17:16:04.188964    5124 logs.go:123] Gathering logs for kube-scheduler [9898ac48152d] ...
	I0913 17:16:04.188974    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9898ac48152d"
	I0913 17:16:04.206209    5124 logs.go:123] Gathering logs for storage-provisioner [0810e822b4ee] ...
	I0913 17:16:04.206219    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0810e822b4ee"
	I0913 17:16:06.720047    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:05.656400    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:05.656446    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:11.722404    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:11.722597    5124 kubeadm.go:597] duration metric: took 4m4.417538166s to restartPrimaryControlPlane
	W0913 17:16:11.722743    5124 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 17:16:11.722807    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0913 17:16:12.756270    5124 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.03346425s)
	I0913 17:16:12.756348    5124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 17:16:12.761468    5124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 17:16:12.764262    5124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 17:16:12.766969    5124 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 17:16:12.766976    5124 kubeadm.go:157] found existing configuration files:
	
	I0913 17:16:12.766999    5124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/admin.conf
	I0913 17:16:12.770109    5124 kubeadm.go:163] "https://control-plane.minikube.internal:50289" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 17:16:12.770132    5124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 17:16:12.773417    5124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/kubelet.conf
	I0913 17:16:12.776440    5124 kubeadm.go:163] "https://control-plane.minikube.internal:50289" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 17:16:12.776470    5124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 17:16:12.778919    5124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/controller-manager.conf
	I0913 17:16:12.781931    5124 kubeadm.go:163] "https://control-plane.minikube.internal:50289" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 17:16:12.781962    5124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 17:16:12.784893    5124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/scheduler.conf
	I0913 17:16:12.787213    5124 kubeadm.go:163] "https://control-plane.minikube.internal:50289" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50289 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 17:16:12.787239    5124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
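Each of the four kubeconfig files is grep'd for the expected control-plane endpoint (https://control-plane.minikube.internal:50289 in this run) and removed when the check fails. Here all four greps exit with status 2 because the preceding `kubeadm reset` already deleted the files, so the `rm -f` calls are no-ops. A compact sketch of the same check-and-remove pattern (the loop form is illustrative; the port is specific to this run):

    endpoint="https://control-plane.minikube.internal:50289"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Remove the file unless it already points at the expected endpoint.
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done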
	I0913 17:16:12.790047    5124 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 17:16:12.808432    5124 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0913 17:16:12.808477    5124 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 17:16:12.856279    5124 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 17:16:12.856343    5124 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 17:16:12.856398    5124 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'

	I0913 17:16:12.911617    5124 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 17:16:12.916760    5124 out.go:235]   - Generating certificates and keys ...
	I0913 17:16:12.916793    5124 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 17:16:12.916819    5124 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 17:16:12.916850    5124 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 17:16:12.916879    5124 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 17:16:12.916919    5124 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 17:16:12.916947    5124 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 17:16:12.916978    5124 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 17:16:12.917011    5124 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 17:16:12.917048    5124 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 17:16:12.917087    5124 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 17:16:12.917107    5124 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 17:16:12.917131    5124 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 17:16:13.108077    5124 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 17:16:13.192549    5124 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 17:16:13.475502    5124 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 17:16:13.511418    5124 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 17:16:13.546548    5124 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 17:16:13.547575    5124 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 17:16:13.547633    5124 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 17:16:13.630974    5124 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 17:16:13.635180    5124 out.go:235]   - Booting up control plane ...
	I0913 17:16:13.635230    5124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 17:16:13.635369    5124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 17:16:13.635413    5124 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 17:16:13.635489    5124 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 17:16:13.636105    5124 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
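At this point kubeadm has written the four static Pod manifests and is waiting (up to 4m0s) for the kubelet to bring them up. If a run stalls in this phase, the manifests and the resulting containers can be inspected directly on the node; a quick check, assuming shell access to the VM:

    sudo ls /etc/kubernetes/manifests
    # expected: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
    docker ps --filter "name=k8s_kube-apiserver"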
	I0913 17:16:10.656692    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:10.656722    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:18.138796    5124 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502246 seconds
	I0913 17:16:18.138866    5124 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 17:16:18.142560    5124 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 17:16:18.657321    5124 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 17:16:18.657583    5124 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-714000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 17:16:19.163606    5124 kubeadm.go:310] [bootstrap-token] Using token: o2d0nq.2impy11oz3kcah35
	I0913 17:16:19.166121    5124 out.go:235]   - Configuring RBAC rules ...
	I0913 17:16:19.166244    5124 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 17:16:19.166292    5124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 17:16:19.173414    5124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 17:16:19.174664    5124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 17:16:19.175777    5124 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 17:16:19.177216    5124 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 17:16:19.180504    5124 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 17:16:19.353068    5124 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 17:16:19.567792    5124 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 17:16:19.568271    5124 kubeadm.go:310] 
	I0913 17:16:19.568303    5124 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 17:16:19.568306    5124 kubeadm.go:310] 
	I0913 17:16:19.568340    5124 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 17:16:19.568343    5124 kubeadm.go:310] 
	I0913 17:16:19.568354    5124 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 17:16:19.568389    5124 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 17:16:19.568438    5124 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 17:16:19.568442    5124 kubeadm.go:310] 
	I0913 17:16:19.568467    5124 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 17:16:19.568472    5124 kubeadm.go:310] 
	I0913 17:16:19.568522    5124 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 17:16:19.568526    5124 kubeadm.go:310] 
	I0913 17:16:19.568569    5124 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 17:16:19.568606    5124 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 17:16:19.568659    5124 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 17:16:19.568667    5124 kubeadm.go:310] 
	I0913 17:16:19.568732    5124 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 17:16:19.568781    5124 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 17:16:19.568788    5124 kubeadm.go:310] 
	I0913 17:16:19.568830    5124 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o2d0nq.2impy11oz3kcah35 \
	I0913 17:16:19.568880    5124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:446f8f90cde123cbedc005b3a5de5af09ada936a0c1ba8e89eedb16e20223601 \
	I0913 17:16:19.568892    5124 kubeadm.go:310] 	--control-plane 
	I0913 17:16:19.568894    5124 kubeadm.go:310] 
	I0913 17:16:19.568931    5124 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 17:16:19.568934    5124 kubeadm.go:310] 
	I0913 17:16:19.569026    5124 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o2d0nq.2impy11oz3kcah35 \
	I0913 17:16:19.569087    5124 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:446f8f90cde123cbedc005b3a5de5af09ada936a0c1ba8e89eedb16e20223601 
	I0913 17:16:19.569151    5124 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
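The init output closes with a warning that the kubelet unit is not enabled; the fix it suggests is the quoted 'systemctl enable kubelet.service'. If the join output above is lost, the --discovery-token-ca-cert-hash can be recomputed from the cluster CA using the standard recipe from the kubeadm documentation; the path below is the certificateDir reported earlier in this run, and the pipeline assumes an RSA CA key, which is what kubeadm generates by default:

    # Recompute the sha256 discovery hash from the cluster CA certificate.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'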
	I0913 17:16:19.569158    5124 cni.go:84] Creating CNI manager for ""
	I0913 17:16:19.569166    5124 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:16:19.573342    5124 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 17:16:19.583384    5124 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 17:16:19.586469    5124 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
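With Kubernetes v1.24+ on the docker runtime under the qemu2 driver, minikube falls back to the plain bridge CNI and writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. The exact contents are not shown in the log; the sketch below is a representative bridge conflist of that shape, for illustration only, and every field value is an assumption:

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF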
	I0913 17:16:19.591155    5124 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 17:16:19.591206    5124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 17:16:19.591229    5124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-714000 minikube.k8s.io/updated_at=2024_09_13T17_16_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=running-upgrade-714000 minikube.k8s.io/primary=true
	I0913 17:16:19.632250    5124 ops.go:34] apiserver oom_adj: -16
	I0913 17:16:19.632264    5124 kubeadm.go:1113] duration metric: took 41.103292ms to wait for elevateKubeSystemPrivileges
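The oom_adj probe confirms the apiserver runs at -16, meaning the kernel OOM killer strongly deprioritizes it, and the minikube-rbac clusterrolebinding grants cluster-admin to the kube-system default service account. Both can be verified by hand with the same binaries the run uses (the `get` below is a verification step, not part of the run):

    cat /proc/$(pgrep kube-apiserver)/oom_adj        # should print -16
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      get clusterrolebinding minikube-rbac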
	I0913 17:16:19.632269    5124 kubeadm.go:394] duration metric: took 4m12.34115075s to StartCluster
	I0913 17:16:19.632279    5124 settings.go:142] acquiring lock: {Name:mk948e653988f014de7183ca44ad61265c2dc06d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:16:19.632376    5124 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:16:19.632770    5124 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/kubeconfig: {Name:mke2b016812cedc34ffbfc79dbc5c22d8c43c377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:16:19.632946    5124 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:16:19.632957    5124 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 17:16:19.632994    5124 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-714000"
	I0913 17:16:19.633010    5124 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-714000"
	W0913 17:16:19.633014    5124 addons.go:243] addon storage-provisioner should already be in state true
	I0913 17:16:19.633025    5124 host.go:66] Checking if "running-upgrade-714000" exists ...
	I0913 17:16:19.633036    5124 config.go:182] Loaded profile config "running-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:16:19.633042    5124 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-714000"
	I0913 17:16:19.633071    5124 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-714000"
	I0913 17:16:19.633907    5124 kapi.go:59] client config for running-upgrade-714000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/running-upgrade-714000/client.key", CAFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105a69800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 17:16:19.634030    5124 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-714000"
	W0913 17:16:19.634035    5124 addons.go:243] addon default-storageclass should already be in state true
	I0913 17:16:19.634043    5124 host.go:66] Checking if "running-upgrade-714000" exists ...
	I0913 17:16:19.637300    5124 out.go:177] * Verifying Kubernetes components...
	I0913 17:16:19.637605    5124 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 17:16:19.641565    5124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 17:16:19.641582    5124 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/running-upgrade-714000/id_rsa Username:docker}
	I0913 17:16:19.645332    5124 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:16:15.656949    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:15.656974    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:19.649262    5124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:16:19.653297    5124 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 17:16:19.653304    5124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 17:16:19.653310    5124 sshutil.go:53] new ssh client: &{IP:localhost Port:50257 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/running-upgrade-714000/id_rsa Username:docker}
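Addon manifests are copied into the VM over SSH, using the connection details in the sshutil lines above: localhost port 50257, user docker, and the profile's id_rsa key. The same session can be opened manually when debugging a wedged node; a sketch reusing those exact details:

    ssh -i /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/running-upgrade-714000/id_rsa \
        -p 50257 docker@localhost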
	I0913 17:16:19.740415    5124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 17:16:19.745175    5124 api_server.go:52] waiting for apiserver process to appear ...
	I0913 17:16:19.745215    5124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 17:16:19.749276    5124 api_server.go:72] duration metric: took 116.321291ms to wait for apiserver process to appear ...
	I0913 17:16:19.749285    5124 api_server.go:88] waiting for apiserver healthz status ...
	I0913 17:16:19.749292    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:19.763534    5124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 17:16:19.789442    5124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 17:16:20.098140    5124 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0913 17:16:20.098153    5124 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0913 17:16:20.657285    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:20.657339    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:24.751306    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:24.751348    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:25.657855    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:25.658032    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:29.751673    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:29.751699    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:30.659064    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:30.659089    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:34.752010    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:34.752049    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:35.660007    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:35.660068    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:39.752490    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:39.752536    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:40.661225    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:40.661252    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:44.753114    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:44.753153    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:45.662695    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:45.662715    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:49.753946    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:49.754004    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0913 17:16:50.099914    5124 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0913 17:16:50.106071    5124 out.go:177] * Enabled addons: storage-provisioner
	I0913 17:16:50.117026    5124 addons.go:510] duration metric: took 30.484527375s for enable addons: enabled=[storage-provisioner]
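The default-storageclass addon fails because listing StorageClasses over https://10.0.2.15:8443 times out, consistent with the healthz probes from both processes (5124 and 5271), which have returned Client.Timeout for the whole window. The probe minikube keeps retrying is a plain GET against /healthz; an equivalent manual probe from inside the VM is sketched below (the curl flags are assumptions, as minikube itself uses a Go HTTP client):

    # Probe the apiserver health endpoint, skipping TLS verification, 5s cap.
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz; echo
    # A healthy apiserver answers: ok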
	I0913 17:16:50.664279    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:50.664454    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:16:50.679449    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:16:50.679539    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:16:50.691529    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:16:50.691615    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:16:50.701973    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:16:50.702044    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:16:50.712877    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:16:50.712965    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:16:50.724306    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:16:50.724403    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:16:50.737511    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:16:50.737586    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:16:50.747957    5271 logs.go:276] 0 containers: []
	W0913 17:16:50.747971    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:16:50.748043    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:16:50.758914    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:16:50.758931    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:16:50.758936    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:16:50.773077    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:16:50.773087    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:16:50.784619    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:16:50.784630    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:16:50.799227    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:16:50.799237    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:16:50.811236    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:16:50.811245    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:16:50.828772    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:16:50.828783    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:16:50.842463    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:16:50.842476    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:16:50.853502    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:16:50.853516    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:16:50.865761    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:16:50.865773    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:16:50.942688    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:16:50.942708    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:16:50.956622    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:16:50.956632    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:16:50.983336    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:16:50.983347    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:16:50.998820    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:16:50.998834    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:16:51.009969    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:16:51.009983    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:16:51.050166    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:16:51.050177    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:16:51.054375    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:16:51.054383    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:16:53.580287    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:54.754589    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:54.754642    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:58.582442    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:58.582674    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:16:58.599573    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:16:58.599683    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:16:58.612114    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:16:58.612206    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:16:58.622926    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:16:58.623012    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:16:58.634026    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:16:58.634140    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:16:58.644877    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:16:58.644955    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:16:58.655561    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:16:58.655645    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:16:58.666229    5271 logs.go:276] 0 containers: []
	W0913 17:16:58.666242    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:16:58.666313    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:16:58.680873    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:16:58.680887    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:16:58.680893    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:16:58.704692    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:16:58.704704    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:16:58.718570    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:16:58.718580    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:16:58.732283    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:16:58.732297    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:16:58.748365    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:16:58.748375    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:16:58.773177    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:16:58.773186    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:16:58.811978    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:16:58.811985    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:16:58.825684    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:16:58.825696    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:16:58.829842    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:16:58.829852    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:16:58.865749    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:16:58.865761    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:16:58.880101    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:16:58.880114    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:16:58.892626    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:16:58.892642    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:16:58.906386    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:16:58.906399    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:16:58.918036    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:16:58.918051    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:16:58.932649    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:16:58.932660    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:16:58.949778    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:16:58.949789    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:16:59.755814    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:59.755867    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:01.463116    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:04.757423    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:04.757457    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:06.465450    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:06.465728    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:06.489839    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:17:06.489978    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:06.506398    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:17:06.506507    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:06.519091    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:17:06.519179    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:06.530774    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:17:06.530857    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:06.541686    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:17:06.541759    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:06.552671    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:17:06.552753    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:06.563452    5271 logs.go:276] 0 containers: []
	W0913 17:17:06.563463    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:06.563535    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:06.574160    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:17:06.574180    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:06.574186    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:06.613327    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:17:06.613337    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:17:06.626944    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:17:06.626959    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:17:06.644505    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:17:06.644516    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:17:06.655946    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:17:06.655956    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:17:06.668613    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:17:06.668623    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:17:06.679965    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:17:06.679975    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:17:06.694458    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:17:06.694470    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:17:06.706423    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:17:06.706433    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:17:06.720774    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:06.720785    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:06.748172    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:17:06.748187    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:06.760009    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:17:06.760021    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:17:06.785755    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:17:06.785767    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:17:06.797146    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:06.797158    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:06.801871    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:06.801878    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:06.839020    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:17:06.839031    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:17:09.359167    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:09.759357    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:09.759381    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:14.361381    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:14.361496    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:14.372821    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:17:14.372902    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:14.383845    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:17:14.383928    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:14.398675    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:17:14.398747    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:14.408955    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:17:14.409024    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:14.419215    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:17:14.419296    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:14.429958    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:17:14.430037    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:14.440407    5271 logs.go:276] 0 containers: []
	W0913 17:17:14.440419    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:14.440495    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:14.451323    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:17:14.451341    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:17:14.451347    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:17:14.465571    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:17:14.465585    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:17:14.498160    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:17:14.498174    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:17:14.513256    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:17:14.513267    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:17:14.526021    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:14.526035    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:14.550619    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:17:14.550628    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:14.562295    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:14.562305    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:14.566816    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:14.566824    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:14.601248    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:17:14.601259    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:17:14.612609    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:17:14.612620    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:17:14.624046    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:17:14.624061    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:17:14.642122    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:17:14.642132    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:17:14.653574    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:14.653586    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:14.691504    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:17:14.691516    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:17:14.705167    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:17:14.705180    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:17:14.719991    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:17:14.720002    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:17:14.761498    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:14.761517    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:17.233668    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:19.762689    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:19.762820    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:19.774081    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:17:19.774165    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:19.785003    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:17:19.785084    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:19.795592    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:17:19.795678    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:19.805947    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:17:19.806028    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:19.817077    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:17:19.817157    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:19.827052    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:17:19.827129    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:19.837669    5124 logs.go:276] 0 containers: []
	W0913 17:17:19.837681    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:19.837747    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:19.848374    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:17:19.848390    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:19.848396    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:19.887153    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:19.887167    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:19.925318    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:17:19.925329    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:17:19.939512    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:17:19.939524    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:17:19.953632    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:17:19.953642    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:17:19.965511    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:17:19.965524    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:17:19.985908    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:17:19.985919    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:17:20.002048    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:17:20.002059    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:20.013944    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:20.013959    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:20.018803    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:17:20.018809    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:17:20.030428    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:17:20.030439    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:17:20.045530    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:17:20.045547    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:17:20.057519    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:20.057533    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:22.582679    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:22.235981    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:22.236457    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:22.269753    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:17:22.269909    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:22.289667    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:17:22.289786    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:22.304591    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:17:22.304690    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:22.316717    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:17:22.316800    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:22.326890    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:17:22.326975    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:22.339241    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:17:22.339325    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:22.349461    5271 logs.go:276] 0 containers: []
	W0913 17:17:22.349473    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:22.349539    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:22.360060    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:17:22.360079    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:17:22.360085    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:17:22.375008    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:17:22.375018    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:17:22.395016    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:17:22.395026    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:17:22.420500    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:17:22.420511    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:17:22.432395    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:17:22.432405    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:22.444087    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:22.444100    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:22.478801    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:17:22.478813    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:17:22.492081    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:17:22.492093    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:17:22.504079    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:17:22.504092    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:17:22.522516    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:17:22.522529    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:17:22.540258    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:22.540272    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:22.580054    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:22.580070    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:22.585111    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:17:22.585120    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:17:22.600366    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:17:22.600380    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:17:22.611745    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:17:22.611757    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:17:22.626232    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:22.626243    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:27.583643    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:27.583775    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:27.598557    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:17:27.598651    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:27.609560    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:17:27.609647    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:27.620120    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:17:27.620202    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:27.630640    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:17:27.630717    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:27.641073    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:17:27.641158    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:27.651925    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:17:27.652000    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:27.662313    5124 logs.go:276] 0 containers: []
	W0913 17:17:27.662325    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:27.662399    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:27.673042    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:17:27.673061    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:17:27.673067    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:17:27.688405    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:17:27.688419    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:17:27.703001    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:17:27.703012    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:17:27.714445    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:17:27.714460    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:17:27.731772    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:17:27.731783    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:17:27.743300    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:17:27.743311    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:27.754866    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:27.754876    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:27.791952    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:27.791961    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:27.797029    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:27.797038    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:27.835453    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:17:27.835469    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:17:27.849656    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:17:27.849667    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:17:27.863668    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:17:27.863678    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:17:27.875671    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:27.875682    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:25.152069    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:30.402131    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:30.154293    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:30.154472    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:30.167830    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:17:30.167926    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:30.179663    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:17:30.179749    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:30.189960    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:17:30.190045    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:30.200167    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:17:30.200253    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:30.215884    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:17:30.215973    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:30.226389    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:17:30.226474    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:30.236629    5271 logs.go:276] 0 containers: []
	W0913 17:17:30.236642    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:30.236711    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:30.247427    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:17:30.247446    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:17:30.247452    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:17:30.261308    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:17:30.261322    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:17:30.272503    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:17:30.272515    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:17:30.289620    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:17:30.289632    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:30.304337    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:30.304350    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:30.340623    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:17:30.340635    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:17:30.354640    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:17:30.354653    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:17:30.384130    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:17:30.384141    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:17:30.398000    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:17:30.398011    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:17:30.412569    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:17:30.412579    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:17:30.424222    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:17:30.424232    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:17:30.436412    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:17:30.436428    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:17:30.447845    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:30.447859    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:30.470666    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:30.470674    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:30.507367    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:30.507378    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:30.511335    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:17:30.511341    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:17:33.025702    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:35.404266    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:35.404527    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:35.429592    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:17:35.429765    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:35.450701    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:17:35.450808    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:35.466238    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:17:35.466330    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:35.476834    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:17:35.476919    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:35.487676    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:17:35.487763    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:35.500545    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:17:35.500622    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:35.510821    5124 logs.go:276] 0 containers: []
	W0913 17:17:35.510833    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:35.510892    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:35.521234    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:17:35.521247    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:17:35.521252    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:17:35.535061    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:17:35.535078    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:17:35.549718    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:17:35.549729    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:17:35.563171    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:17:35.563182    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:17:35.574585    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:17:35.574599    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:17:35.592173    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:17:35.592184    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:35.603386    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:35.603398    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:35.608018    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:17:35.608025    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:17:35.623636    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:17:35.623647    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:17:35.639059    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:17:35.639069    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:17:35.650584    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:35.650595    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:35.674092    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:35.674102    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:35.712512    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:35.712520    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:38.255610    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:38.026331    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:38.026596    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:38.048389    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:17:38.048505    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:38.063960    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:17:38.064063    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:38.076706    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:17:38.076794    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:38.088622    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:17:38.088704    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:38.103848    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:17:38.103918    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:38.114100    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:17:38.114190    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:38.124977    5271 logs.go:276] 0 containers: []
	W0913 17:17:38.124991    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:38.125051    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:38.135403    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:17:38.135424    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:17:38.135429    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:17:38.149937    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:17:38.149948    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:17:38.167939    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:17:38.167950    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:17:38.180022    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:38.180033    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:38.218284    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:17:38.218292    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:17:38.229603    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:17:38.229615    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:17:38.241397    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:17:38.241409    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:17:38.255323    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:17:38.255335    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:17:38.282924    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:17:38.282939    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:17:38.295448    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:38.295459    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:38.318299    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:38.318307    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:38.322572    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:38.322579    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:38.356861    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:17:38.356872    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:17:38.370882    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:17:38.370893    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:17:38.385355    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:17:38.385371    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:17:38.397155    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:17:38.397164    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:43.257123    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:43.257311    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:43.274355    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:17:43.274458    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:43.287171    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:17:43.287264    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:43.298205    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:17:43.298294    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:43.308505    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:17:43.308588    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:43.318991    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:17:43.319074    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:43.330010    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:17:43.330090    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:43.340338    5124 logs.go:276] 0 containers: []
	W0913 17:17:43.340349    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:43.340418    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:43.351158    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:17:43.351174    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:17:43.351180    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:17:43.363080    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:17:43.363092    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:17:43.375199    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:17:43.375212    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:17:43.387724    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:17:43.387736    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:17:43.411774    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:43.411784    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:43.436413    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:43.436425    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:43.474932    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:43.474943    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:43.510673    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:17:43.510685    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:17:43.525234    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:17:43.525250    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:17:43.539557    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:17:43.539570    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:17:43.555047    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:17:43.555061    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:17:43.566479    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:17:43.566492    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:43.578050    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:43.578060    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:40.918649    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:46.082755    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:45.920861    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:45.921101    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:45.936893    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:17:45.936986    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:45.951979    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:17:45.952065    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:45.963157    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:17:45.963244    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:45.973835    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:17:45.973922    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:45.984946    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:17:45.985029    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:45.995701    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:17:45.995782    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:46.005964    5271 logs.go:276] 0 containers: []
	W0913 17:17:46.005977    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:46.006049    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:46.016705    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:17:46.016721    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:17:46.016726    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:17:46.028615    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:17:46.028627    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:46.041173    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:17:46.041188    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:17:46.067346    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:17:46.067359    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:17:46.082207    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:17:46.082222    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:17:46.097216    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:17:46.097232    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:17:46.112077    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:17:46.112089    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:17:46.131283    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:17:46.131293    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:17:46.145588    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:17:46.145598    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:17:46.157398    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:17:46.157411    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:17:46.169639    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:17:46.169649    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:17:46.184144    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:17:46.184154    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:17:46.201380    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:46.201392    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:46.238818    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:46.238832    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:46.275768    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:46.275783    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:46.279754    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:46.279762    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:48.804494    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:51.084868    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:51.085048    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:51.096181    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:17:51.096265    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:51.107491    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:17:51.107574    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:51.118539    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:17:51.118621    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:51.130888    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:17:51.130960    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:51.141932    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:17:51.142001    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:51.152328    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:17:51.152398    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:51.162595    5124 logs.go:276] 0 containers: []
	W0913 17:17:51.162610    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:51.162681    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:51.173595    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:17:51.173613    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:17:51.173618    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:17:51.188749    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:17:51.188764    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:17:51.200596    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:17:51.200608    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:17:51.218289    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:17:51.218303    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:17:51.229964    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:51.229976    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:51.266968    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:51.266978    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:51.271942    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:51.271950    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:51.307729    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:17:51.307742    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:17:51.319577    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:51.319589    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:51.343131    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:17:51.343146    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:51.355617    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:17:51.355630    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:17:51.374703    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:17:51.374714    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:17:51.388739    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:17:51.388749    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:17:53.901720    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:53.806723    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:53.806913    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:53.824065    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:17:53.824154    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:53.835916    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:17:53.835996    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:53.846711    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:17:53.846789    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:53.857324    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:17:53.857404    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:53.868239    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:17:53.868327    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:53.879461    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:17:53.879540    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:53.890561    5271 logs.go:276] 0 containers: []
	W0913 17:17:53.890572    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:53.890638    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:53.901810    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:17:53.901825    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:53.901831    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:53.940121    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:53.940132    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:53.982453    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:17:53.982464    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:17:53.996833    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:17:53.996844    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:17:54.011887    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:17:54.011898    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:17:54.025634    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:17:54.025647    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:17:54.040575    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:17:54.040588    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:17:54.052118    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:17:54.052131    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:17:54.064253    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:17:54.064264    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:17:54.078170    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:17:54.078180    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:17:54.095758    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:17:54.095773    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:54.108095    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:54.108106    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:54.112353    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:17:54.112362    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:17:54.126144    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:54.126157    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:54.150615    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:17:54.150628    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:17:54.181653    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:17:54.181665    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:17:58.903916    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:58.904234    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:58.936520    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:17:58.936679    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:58.956003    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:17:58.956117    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:58.969695    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:17:58.969777    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:58.987792    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:17:58.987878    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:58.998719    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:17:58.998808    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:59.009750    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:17:59.009839    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:59.020329    5124 logs.go:276] 0 containers: []
	W0913 17:17:59.020339    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:59.020410    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:59.031512    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:17:59.031528    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:17:59.031533    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:17:59.043470    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:59.043481    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:59.081327    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:17:59.081340    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:17:59.100968    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:17:59.100980    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:17:59.112443    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:17:59.112456    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:17:56.696416    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:59.125724    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:17:59.125733    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:17:59.140559    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:59.140571    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:59.163348    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:17:59.163355    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:59.174742    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:59.174753    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:59.179294    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:59.179301    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:59.215147    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:17:59.215160    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:17:59.229298    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:17:59.229312    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:17:59.240628    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:17:59.240641    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:18:01.760309    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:01.698633    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:01.698803    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:01.712823    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:18:01.712903    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:01.723733    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:18:01.723809    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:01.734349    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:18:01.734434    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:01.751685    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:18:01.751758    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:01.761967    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:18:01.762034    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:01.772807    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:18:01.772887    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:01.782704    5271 logs.go:276] 0 containers: []
	W0913 17:18:01.782715    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:01.782782    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:01.793086    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:18:01.793107    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:01.793112    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:01.833654    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:18:01.833664    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:18:01.848494    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:18:01.848504    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:18:01.870297    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:01.870307    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:01.895181    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:18:01.895191    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:18:01.909766    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:18:01.909778    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:18:01.934641    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:18:01.934651    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:18:01.946112    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:18:01.946125    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:01.958550    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:01.958560    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:01.995244    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:01.995254    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:01.999398    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:18:01.999412    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:18:02.013411    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:18:02.013422    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:18:02.028750    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:18:02.028760    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:18:02.043627    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:18:02.043638    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:18:02.057909    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:18:02.057920    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:18:02.069625    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:18:02.069637    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:18:04.590903    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:06.762434    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:06.762636    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:06.774782    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:18:06.774877    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:06.788543    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:18:06.788640    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:06.799535    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:18:06.799623    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:06.814101    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:18:06.814187    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:06.824607    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:18:06.824687    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:06.837601    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:18:06.837674    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:06.848109    5124 logs.go:276] 0 containers: []
	W0913 17:18:06.848127    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:06.848201    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:06.858742    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:18:06.858755    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:18:06.858760    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:18:06.872923    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:18:06.872934    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:18:06.887421    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:18:06.887432    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:18:06.902295    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:18:06.902305    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:18:06.918233    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:18:06.918244    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:06.929410    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:06.929419    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:06.963724    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:06.963735    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:06.968230    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:18:06.968240    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:18:06.979634    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:18:06.979644    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:18:06.991195    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:18:06.991206    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:18:07.008493    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:18:07.008506    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:18:07.024102    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:07.024111    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:07.049583    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:07.049600    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:09.593053    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:09.593138    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:09.604840    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:18:09.604924    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:09.624285    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:18:09.624371    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:09.635227    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:18:09.635310    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:09.646039    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:18:09.646128    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:09.660166    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:18:09.660251    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:09.670868    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:18:09.670954    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:09.681751    5271 logs.go:276] 0 containers: []
	W0913 17:18:09.681763    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:09.681832    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:09.692127    5271 logs.go:276] 1 containers: [5b6e0dea8170]
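The eight "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" probes that open each cycle enumerate the container IDs for every control-plane component; logs.go:276 then reports the count, and an empty result produces the 'No container was found matching "kindnet"' warning seen above. A sketch of the same enumeration, using a hypothetical helper rather than minikube's logs.go:

    // list_containers.go: sketch of the per-component container discovery
    // step. For each component it runs
    //   docker ps -a --filter=name=k8s_<name> --format={{.ID}}
    // and splits the output into container IDs.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns all container IDs whose name matches k8s_<name>.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("W %s: %v\n", c, err)
                continue
            }
            // Matches the "N containers: [...]" format of the logs.go:276 lines.
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }

Note that pid 5271's cluster shows two IDs for apiserver, etcd, scheduler, and controller-manager: one exited and one restarted container, both matched by the name filter because of the -a flag.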
	I0913 17:18:09.692146    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:09.692153    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:09.729598    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:09.729606    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:09.765937    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:18:09.765951    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:18:09.779808    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:18:09.779822    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:18:09.805426    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:18:09.805444    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:18:09.820081    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:18:09.820096    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:18:09.831559    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:09.831573    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:09.854688    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:18:09.854697    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:09.866316    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:18:09.866330    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:18:09.882288    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:18:09.882299    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:18:09.896117    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:18:09.896130    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:18:09.907818    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:18:09.907831    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:18:09.919325    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:09.919339    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:09.923445    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:18:09.923451    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:18:09.939761    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:18:09.939773    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:18:09.959788    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:18:09.959801    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
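Each "Gathering logs for <component> [<id>]" line is immediately followed by "docker logs --tail 400 <id>" run through /bin/bash -c, so every container contributes at most its last 400 log lines. An equivalent sketch, with an assumed helper name and simplified error handling:

    // tail_container_logs.go: sketch of the per-container gathering step,
    // fetching the last 400 lines of each container exactly as the
    // ssh_runner lines in this transcript show.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailLogs returns the last n lines of the given container's logs.
    func tailLogs(id string, n int) (string, error) {
        out, err := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("docker logs --tail %d %s", n, id)).CombinedOutput()
        return string(out), err
    }

    func main() {
        // IDs taken from the cycle above; any container ID works.
        for _, id := range []string{"b2e8459e4cd9", "9ba22d798507"} {
            fmt.Printf("Gathering logs for [%s] ...\n", id)
            out, err := tailLogs(id, 400)
            if err != nil {
                fmt.Printf("W gather %s: %v\n", id, err)
                continue
            }
            fmt.Print(out)
        }
    }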
	I0913 17:18:09.591335    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:12.475024    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:14.593557    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:14.593861    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:14.620996    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:18:14.621133    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:14.637468    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:18:14.637566    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:14.650841    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:18:14.650926    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:14.663063    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:18:14.663148    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:14.674035    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:18:14.674122    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:14.684997    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:18:14.685082    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:14.695488    5124 logs.go:276] 0 containers: []
	W0913 17:18:14.695499    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:14.695572    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:14.705862    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:18:14.705880    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:18:14.705886    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:18:14.721628    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:18:14.721667    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:18:14.733292    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:18:14.733303    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:18:14.748215    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:14.748227    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:14.786510    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:14.786521    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:14.790919    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:18:14.790926    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:18:14.808822    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:18:14.808832    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:18:14.820402    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:18:14.820412    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:18:14.832390    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:18:14.832401    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:18:14.850606    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:18:14.850622    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:18:14.862796    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:14.862808    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:14.887672    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:14.887686    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:14.924158    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:18:14.924168    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
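Beyond the per-container tails, each cycle pulls four host-level sources: journalctl for the kubelet unit and for the docker plus cri-docker units (400 lines each), a filtered dmesg (warn and above), "kubectl describe nodes" against the in-VM kubeconfig, and a "container status" inventory. The inventory one-liner, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, prefers crictl when it is installed; when it is absent, the bare crictl invocation fails and the || branch falls back to docker ps -a. A Go sketch of that fallback logic, assumed from the shell command rather than taken from minikube's source:

    // container_status.go: sketch of the "container status" fallback,
    // mirroring `which crictl || echo crictl` ps -a || docker ps -a:
    // try crictl first, fall back to the Docker CLI when it is absent.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func containerStatus() (string, error) {
        // Equivalent of `which crictl`: only attempt crictl if it is on PATH.
        if _, err := exec.LookPath("crictl"); err == nil {
            out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
            if err == nil {
                return string(out), nil
            }
        }
        // crictl missing or failed: fall back to docker, as the || branch does.
        out, err := exec.Command("docker", "ps", "-a").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("container status failed:", err)
            return
        }
        fmt.Print(out)
    }

The shell form is a common pattern for runtime-agnostic diagnostics: it works on both CRI-based and Docker-only nodes without probing the runtime in advance.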
	I0913 17:18:17.437332    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:17.477311    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:17.477523    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:17.498819    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:18:17.498932    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:17.513639    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:18:17.513737    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:17.526193    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:18:17.526311    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:17.537566    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:18:17.537652    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:17.548249    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:18:17.548332    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:17.559482    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:18:17.559566    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:17.570323    5271 logs.go:276] 0 containers: []
	W0913 17:18:17.570337    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:17.570404    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:17.581676    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:18:17.581694    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:17.581699    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:17.618243    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:18:17.618255    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:18:17.633331    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:18:17.633343    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:18:17.644744    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:18:17.644758    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:18:17.656267    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:17.656278    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:17.679364    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:18:17.679374    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:18:17.705095    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:18:17.705110    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:18:17.717027    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:18:17.717039    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:18:17.734491    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:17.734502    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:17.770638    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:18:17.770649    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:18:17.785286    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:18:17.785297    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:18:17.799638    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:17.799653    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:17.803926    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:18:17.803933    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:18:17.820961    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:18:17.820972    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:18:17.840450    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:18:17.840463    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:18:17.853511    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:18:17.853525    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:22.439668    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:22.439862    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:22.455358    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:18:22.455447    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:22.467584    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:18:22.467673    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:22.478277    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:18:22.478352    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:22.492260    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:18:22.492342    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:22.502977    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:18:22.503068    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:22.513746    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:18:22.513826    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:22.523626    5124 logs.go:276] 0 containers: []
	W0913 17:18:22.523642    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:22.523710    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:22.534175    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:18:22.534192    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:18:22.534198    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:18:22.552160    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:18:22.552176    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:18:22.563523    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:18:22.563539    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:18:22.575234    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:18:22.575246    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:18:22.592435    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:22.592448    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:22.629869    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:22.629882    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:22.634376    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:22.634384    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:22.669902    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:18:22.669912    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:18:22.685514    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:18:22.685524    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:18:22.701868    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:18:22.701884    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:18:22.713590    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:18:22.713602    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:22.725040    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:18:22.725051    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:18:22.740791    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:22.740802    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:20.367819    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:25.267630    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:25.369967    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:25.370139    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:25.387789    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:18:25.387891    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:25.400148    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:18:25.400232    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:25.412138    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:18:25.412219    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:25.422254    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:18:25.422336    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:25.433389    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:18:25.433476    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:25.444883    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:18:25.444970    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:25.455060    5271 logs.go:276] 0 containers: []
	W0913 17:18:25.455075    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:25.455149    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:25.465258    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:18:25.465276    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:18:25.465283    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:18:25.477152    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:18:25.477164    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:18:25.489927    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:18:25.489938    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:18:25.504640    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:18:25.504653    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:18:25.517760    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:18:25.517772    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:25.531565    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:25.531578    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:25.569097    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:18:25.569110    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:18:25.583117    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:18:25.583129    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:18:25.594234    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:25.594245    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:25.618541    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:25.618551    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:25.623135    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:25.623144    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:25.657934    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:18:25.657949    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:18:25.683531    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:18:25.683550    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:18:25.697445    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:18:25.697456    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:18:25.712348    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:18:25.712364    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:18:25.723654    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:18:25.723664    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:18:28.241601    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:30.269789    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:30.269962    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:30.286047    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:18:30.286149    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:30.297827    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:18:30.297917    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:30.312328    5124 logs.go:276] 2 containers: [3636c038ac5d 80c9d19704af]
	I0913 17:18:30.312415    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:30.322359    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:18:30.322440    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:30.333090    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:18:30.333176    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:30.344397    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:18:30.344475    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:30.354178    5124 logs.go:276] 0 containers: []
	W0913 17:18:30.354189    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:30.354264    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:30.364477    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:18:30.364492    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:18:30.364499    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:18:30.379303    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:18:30.379314    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:18:30.390906    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:18:30.390920    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:18:30.405593    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:18:30.405604    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:18:30.417489    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:18:30.417499    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:18:30.440992    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:18:30.441003    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:18:30.452333    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:30.452343    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:30.475959    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:30.475972    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:30.480373    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:18:30.480382    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:30.492031    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:30.492044    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:30.532687    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:18:30.532703    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:18:30.547540    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:18:30.547555    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:18:30.560224    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:30.560234    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:33.100277    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:33.243883    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:33.244004    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:33.262643    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:18:33.262728    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:33.273280    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:18:33.273354    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:33.283466    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:18:33.283554    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:33.294358    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:18:33.294443    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:33.304681    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:18:33.304766    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:33.315797    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:18:33.315882    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:33.326053    5271 logs.go:276] 0 containers: []
	W0913 17:18:33.326068    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:33.326146    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:33.336374    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:18:33.336392    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:18:33.336405    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:18:33.351666    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:18:33.351675    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:18:33.365866    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:18:33.365879    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:18:33.380027    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:33.380039    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:33.417431    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:18:33.417446    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:18:33.429762    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:18:33.429773    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:18:33.443015    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:18:33.443026    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:33.456297    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:18:33.456312    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:18:33.467927    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:18:33.467938    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:18:33.493076    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:18:33.493086    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:18:33.509837    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:33.509849    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:33.534348    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:33.534357    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:33.539039    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:18:33.539046    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:18:33.553715    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:18:33.553727    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:18:33.565376    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:18:33.565392    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:18:33.576328    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:33.576338    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:38.102405    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:38.102518    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:38.115977    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:18:38.116070    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:38.133181    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:18:38.133263    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:38.143626    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:18:38.143718    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:38.154336    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:18:38.154424    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:38.165071    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:18:38.165147    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:38.181514    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:18:38.181596    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:38.192113    5124 logs.go:276] 0 containers: []
	W0913 17:18:38.192130    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:38.192196    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:38.203559    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:18:38.203584    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:18:38.203590    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:18:38.217753    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:18:38.217766    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:18:38.230409    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:18:38.230421    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:18:38.246449    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:18:38.246460    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:18:38.258395    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:18:38.258406    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:38.270742    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:38.270756    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:38.275780    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:18:38.275789    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:18:38.287046    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:18:38.287059    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:18:38.299316    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:38.299327    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:38.324450    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:18:38.324460    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:18:38.337822    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:18:38.337835    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:18:38.358057    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:38.358068    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:38.393769    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:18:38.393779    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:18:38.409014    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:18:38.409029    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:18:38.427415    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:38.427429    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:36.116964    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:40.968605    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:41.119247    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:41.119475    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:41.137006    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:18:41.137114    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:41.149758    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:18:41.149841    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:41.160830    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:18:41.160909    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:41.181502    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:18:41.181591    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:41.193255    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:18:41.193338    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:41.204020    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:18:41.204104    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:41.214659    5271 logs.go:276] 0 containers: []
	W0913 17:18:41.214673    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:41.214741    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:41.225251    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:18:41.225268    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:41.225274    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:41.229341    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:18:41.229347    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:18:41.240635    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:18:41.240646    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:18:41.253333    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:41.253344    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:41.291541    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:18:41.291551    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:18:41.306173    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:18:41.306184    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:18:41.320561    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:18:41.320572    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:18:41.334806    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:18:41.334820    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:18:41.352576    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:18:41.352591    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:18:41.364768    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:41.364780    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:41.388871    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:18:41.388880    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:41.400451    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:18:41.400465    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:18:41.426374    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:18:41.426388    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:18:41.438700    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:18:41.438712    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:18:41.452528    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:18:41.452543    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:18:41.468944    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:41.468955    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:44.005600    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:45.970910    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:45.971154    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:45.988472    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:18:45.988574    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:46.001094    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:18:46.001187    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:46.012316    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:18:46.012405    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:46.022964    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:18:46.023044    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:46.033687    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:18:46.033770    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:46.043885    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:18:46.043966    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:46.054746    5124 logs.go:276] 0 containers: []
	W0913 17:18:46.054759    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:46.054833    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:46.067694    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:18:46.067713    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:18:46.067719    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:18:46.081482    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:46.081494    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:46.104633    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:46.104642    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:46.140734    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:46.140744    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:46.176567    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:18:46.176578    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:18:46.188458    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:18:46.188469    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:18:46.200278    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:18:46.200293    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:18:46.219121    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:18:46.219132    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:18:46.233756    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:18:46.233768    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:18:46.247985    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:18:46.247998    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:18:46.259830    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:18:46.259844    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:18:46.284223    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:18:46.284237    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:18:46.303540    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:18:46.303551    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:46.315386    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:46.315397    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:46.320178    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:18:46.320189    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:18:48.834228    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:49.007854    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:49.008021    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:49.020395    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:18:49.020481    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:49.032707    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:18:49.032795    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:49.047598    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:18:49.047672    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:49.061060    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:18:49.061138    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:49.071234    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:18:49.071315    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:49.082132    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:18:49.082219    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:49.093532    5271 logs.go:276] 0 containers: []
	W0913 17:18:49.093543    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:49.093615    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:49.104062    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:18:49.104104    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:49.104109    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:49.143898    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:18:49.143910    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:18:49.159341    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:18:49.159351    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:18:49.177934    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:18:49.177948    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:18:49.190175    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:18:49.190190    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:18:49.206751    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:18:49.206761    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:18:49.225788    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:18:49.225802    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:18:49.237251    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:18:49.237262    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:18:49.262354    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:18:49.262364    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:18:49.280111    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:18:49.280122    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:18:49.291879    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:49.291891    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:49.316105    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:18:49.316113    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:49.327306    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:49.327319    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:49.364121    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:49.364132    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:49.368103    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:18:49.368109    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:18:49.382737    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:18:49.382752    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:18:53.836800    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:53.836967    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:53.853227    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:18:53.853330    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:53.867200    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:18:53.867290    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:53.878071    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:18:53.878156    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:53.888777    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:18:53.888861    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:53.900158    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:18:53.900233    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:53.910887    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:18:53.910966    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:53.921871    5124 logs.go:276] 0 containers: []
	W0913 17:18:53.921884    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:53.921956    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:53.932596    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:18:53.932615    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:53.932621    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:53.967157    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:18:53.967168    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:18:53.984973    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:18:53.984984    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:18:53.997192    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:18:53.997203    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:18:54.009299    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:18:54.009311    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:18:54.024405    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:18:54.024418    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:18:54.036708    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:18:54.036721    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:18:54.057234    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:18:54.057245    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:18:54.069023    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:54.069033    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:54.107150    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:54.107161    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:54.111908    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:18:54.111914    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:18:51.899244    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:54.125525    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:18:54.125538    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:18:54.142995    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:18:54.143011    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:18:54.154849    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:54.154862    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:54.178265    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:18:54.178275    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:56.691499    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
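The repeated "Checking apiserver healthz ... stopped: ... Client.Timeout exceeded" pairs above are one probe failing over and over: an HTTPS GET against https://10.0.2.15:8443/healthz inside the QEMU guest that never answers within the client timeout. A minimal standalone sketch of such a probe follows; this is not minikube's actual api_server.go code, the ~5 s timeout is read off the log timestamps, and the TLS handling is an assumption:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver health endpoint and
// reports failure the way the log above does ("stopped: <url>: <err>").
func checkHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		// Assumption: probing from outside the cluster CA, so certificate
		// verification is skipped; minikube itself trusts the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz", 5*time.Second); err != nil {
		fmt.Println(err)
	}
}
```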
	I0913 17:18:56.901373    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:56.901560    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:56.917131    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:18:56.917238    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:56.929925    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:18:56.930014    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:56.944790    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:18:56.944869    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:56.955251    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:18:56.955335    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:56.965266    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:18:56.965341    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:56.975949    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:18:56.976018    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:56.985596    5271 logs.go:276] 0 containers: []
	W0913 17:18:56.985608    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:56.985679    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:56.996119    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:18:56.996135    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:18:56.996140    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:18:57.014035    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:18:57.014046    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:18:57.026113    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:57.026125    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:57.030693    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:18:57.030702    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:18:57.044929    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:18:57.044939    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:18:57.056318    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:18:57.056327    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:18:57.074698    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:57.074713    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:57.098989    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:18:57.099002    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:57.110692    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:18:57.110704    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:18:57.124831    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:18:57.124843    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:18:57.149258    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:18:57.149270    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:18:57.166101    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:57.166111    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:57.202900    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:57.202907    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:57.237286    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:18:57.237296    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:18:57.252151    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:18:57.252163    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:18:57.266771    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:18:57.266784    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:18:59.780612    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:01.693802    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:01.694129    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:01.723241    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:19:01.723404    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:01.741385    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:19:01.741467    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:01.758827    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:19:01.758923    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:01.769726    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:19:01.769799    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:01.780355    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:19:01.780444    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:01.791074    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:19:01.791158    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:01.801466    5124 logs.go:276] 0 containers: []
	W0913 17:19:01.801478    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:01.801548    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:01.813142    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:19:01.813159    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:19:01.813165    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:19:01.828069    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:19:01.828080    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:19:01.843351    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:01.843362    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:01.847996    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:19:01.848002    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:19:01.862489    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:19:01.862502    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:19:01.875653    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:19:01.875664    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:01.887237    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:01.887249    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:01.923109    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:19:01.923124    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:19:01.935291    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:01.935303    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:01.959197    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:19:01.959210    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:19:01.971161    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:19:01.971177    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:19:01.988956    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:01.988968    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:02.027824    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:19:02.027839    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:19:02.042631    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:19:02.042642    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:19:02.054494    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:19:02.054507    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:19:04.783008    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:04.783436    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:04.812206    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:19:04.812342    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:04.829998    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:19:04.830111    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:04.843735    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:19:04.843832    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:04.855449    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:19:04.855523    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:04.866773    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:19:04.866858    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:04.877931    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:19:04.878009    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:04.892794    5271 logs.go:276] 0 containers: []
	W0913 17:19:04.892807    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:04.892875    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:04.903076    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:19:04.903093    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:19:04.903099    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:19:04.917485    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:19:04.917494    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:19:04.930765    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:19:04.930779    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:04.942842    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:04.942854    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:04.983034    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:19:04.983046    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:19:04.570442    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:05.004028    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:19:05.004040    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:19:05.019040    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:19:05.019050    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:19:05.031107    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:19:05.031119    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:19:05.042816    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:05.042829    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:05.080643    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:19:05.080654    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:19:05.094508    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:19:05.094524    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:19:05.119754    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:19:05.119765    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:19:05.131562    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:19:05.131575    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:19:05.149176    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:05.149186    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:05.172857    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:05.172868    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:05.176852    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:19:05.176862    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:19:07.696970    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:09.572682    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:09.572820    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:09.584637    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:19:09.584724    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:09.595615    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:19:09.595702    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:09.606827    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:19:09.606908    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:09.630591    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:19:09.630690    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:09.641459    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:19:09.641546    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:09.652388    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:19:09.652474    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:09.662600    5124 logs.go:276] 0 containers: []
	W0913 17:19:09.662612    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:09.662683    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:09.673265    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:19:09.673284    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:19:09.673290    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:19:09.685534    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:19:09.685547    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:19:09.699633    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:19:09.699644    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:19:09.715608    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:19:09.715618    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:19:09.726957    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:19:09.726968    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:19:09.739045    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:09.739059    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:09.778057    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:09.778070    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:09.783120    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:19:09.783127    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:19:09.795627    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:09.795642    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:09.820470    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:19:09.820481    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:09.832745    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:19:09.832762    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:19:09.845122    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:19:09.845136    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:19:09.860830    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:19:09.860859    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:19:09.873134    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:19:09.873150    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:19:09.890951    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:09.890961    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:12.427210    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:12.699273    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:12.699516    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:12.722004    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:19:12.722120    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:12.737489    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:19:12.737580    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:12.749605    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:19:12.749697    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:12.761101    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:19:12.761195    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:12.771420    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:19:12.771509    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:12.785841    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:19:12.785927    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:12.795932    5271 logs.go:276] 0 containers: []
	W0913 17:19:12.795946    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:12.796016    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:12.806893    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:19:12.806914    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:19:12.806921    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:19:12.818741    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:12.818754    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:12.842623    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:19:12.842633    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:12.854569    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:12.854580    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:12.894215    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:12.894254    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:12.929086    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:12.929097    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:12.933397    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:19:12.933404    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:19:12.958677    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:19:12.958688    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:19:12.973676    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:19:12.973687    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:19:12.987418    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:19:12.987435    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:19:13.001034    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:19:13.001046    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:19:13.012852    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:19:13.012863    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:19:13.029719    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:19:13.029730    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:19:13.044953    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:19:13.044964    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:19:13.070824    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:19:13.070839    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:19:13.106166    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:19:13.106180    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:19:17.429845    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:17.430194    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:17.460422    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:19:17.460541    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:17.477241    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:19:17.477359    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:17.491568    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:19:17.491659    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:17.503963    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:19:17.504041    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:17.514265    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:19:17.514355    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:17.525081    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:19:17.525163    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:17.535692    5124 logs.go:276] 0 containers: []
	W0913 17:19:17.535705    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:17.535772    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:17.546290    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:19:17.546310    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:19:17.546318    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:19:17.565168    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:17.565185    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:17.570213    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:19:17.570222    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:19:17.582390    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:19:17.582401    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:19:17.594567    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:19:17.594583    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:19:17.610086    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:19:17.610102    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:19:17.622019    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:17.622030    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:17.659486    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:19:17.659496    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:19:17.671286    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:19:17.671297    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:19:17.691759    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:19:17.691769    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:19:17.709314    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:17.709325    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:17.734005    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:19:17.734012    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:17.746003    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:17.746012    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:17.780178    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:19:17.780190    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:19:17.792819    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:19:17.792831    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
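Each "container status" step above runs a shell one-liner that prefers crictl when it is installed and falls back to plain docker: `which crictl || echo crictl` expands to the literal word "crictl" when the binary is absent, so the sudo invocation fails and the trailing `|| sudo docker ps -a` runs instead. A sketch of invoking it the way ssh_runner.go does (locally here; minikube runs it over SSH):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command string copied verbatim from the "container status" lines above.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Printf("%s", out)
}
```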
	I0913 17:19:15.621267    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:20.313734    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:20.623499    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:20.623611    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:20.635235    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:19:20.635327    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:20.645850    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:19:20.645935    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:20.657077    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:19:20.657151    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:20.671564    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:19:20.671651    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:20.682731    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:19:20.682804    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:20.693460    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:19:20.693533    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:20.707554    5271 logs.go:276] 0 containers: []
	W0913 17:19:20.707565    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:20.707628    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:20.717555    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:19:20.717572    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:19:20.717579    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:19:20.733002    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:19:20.733011    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:20.744677    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:19:20.744689    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:19:20.756749    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:19:20.756761    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:19:20.781283    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:19:20.781295    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:19:20.795689    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:19:20.795703    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:19:20.810808    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:19:20.810821    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:19:20.828722    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:20.828732    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:20.863679    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:19:20.863692    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:19:20.877824    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:19:20.877837    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:19:20.893106    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:20.893120    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:20.915887    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:20.915896    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:20.920001    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:19:20.920006    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:19:20.933881    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:19:20.933895    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:19:20.946756    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:19:20.946767    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:19:20.958319    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:20.958332    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:23.497429    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:25.316082    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:25.316302    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:25.333332    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:19:25.333433    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:25.347258    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:19:25.347350    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:25.358923    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:19:25.359009    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:25.369638    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:19:25.369708    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:25.380774    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:19:25.380859    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:25.390955    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:19:25.391034    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:25.401395    5124 logs.go:276] 0 containers: []
	W0913 17:19:25.401405    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:25.401467    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:25.411882    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:19:25.411901    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:19:25.411908    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:19:25.424023    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:25.424034    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:25.448514    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:19:25.448538    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:19:25.462981    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:19:25.462992    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:19:25.477368    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:19:25.477381    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:19:25.489122    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:19:25.489136    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:19:25.501048    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:25.501060    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:25.505462    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:19:25.505471    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:19:25.520779    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:19:25.520791    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:19:25.542197    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:19:25.542208    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:19:25.560127    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:19:25.560139    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:19:25.577386    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:25.577398    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:25.612977    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:19:25.612988    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:19:25.625282    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:19:25.625294    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:25.636990    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:25.637001    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:28.177023    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:28.499608    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:28.499800    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:28.512969    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:19:28.513062    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:28.523741    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:19:28.523823    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:28.534993    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:19:28.535071    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:28.545513    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:19:28.545598    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:28.556225    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:19:28.556305    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:28.566487    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:19:28.566576    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:28.577205    5271 logs.go:276] 0 containers: []
	W0913 17:19:28.577217    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:28.577289    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:28.587409    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:19:28.587426    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:28.587431    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:28.611322    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:19:28.611336    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:28.623531    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:19:28.623544    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:19:28.649439    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:19:28.649450    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:19:28.663601    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:19:28.663610    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:19:28.674940    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:19:28.674952    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:19:28.687126    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:28.687137    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:28.725698    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:28.725707    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:28.760273    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:19:28.760288    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:19:28.778796    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:19:28.778806    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:19:28.791517    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:28.791527    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:28.795728    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:19:28.795734    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:19:28.822791    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:19:28.822802    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:19:28.844764    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:19:28.844775    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:19:28.859889    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:19:28.859902    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:19:28.871840    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:19:28.871853    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
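Every gathering cycle above has the same shape: enumerate per-component containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` (the logs.go:276 lines), then tail each hit with `docker logs --tail 400 <id>` (the logs.go:123 lines). A self-contained sketch of that loop; the command strings are copied from the log, but the helper names are illustrative, not minikube's API:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// The component names come straight from the filters in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

// containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(err)
			continue
		}
		if len(ids) == 0 {
			// Matches the warning the log prints for kindnet on every pass.
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Mirrors: docker logs --tail 400 <id>
			tail, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, tail)
		}
	}
}
```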
	I0913 17:19:33.178489    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:33.178632    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:33.191823    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:19:33.191920    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:33.210803    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:19:33.210879    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:33.222269    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:19:33.222350    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:33.232847    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:19:33.232917    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:33.243224    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:19:33.243315    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:33.261508    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:19:33.261590    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:33.271533    5124 logs.go:276] 0 containers: []
	W0913 17:19:33.271545    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:33.271616    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:33.282216    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:19:33.282234    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:19:33.282240    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:19:33.300167    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:19:33.300180    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:19:33.312098    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:19:33.312110    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:19:33.338372    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:19:33.338383    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:19:33.350673    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:19:33.350685    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:19:33.364777    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:19:33.364790    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:19:33.381204    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:19:33.381214    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:33.394605    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:33.394621    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:33.400058    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:19:33.400068    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:19:33.413255    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:19:33.413271    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:19:33.424906    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:33.424917    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:33.450036    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:33.450075    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:33.489056    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:19:33.489068    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:19:33.503273    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:19:33.503284    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:19:33.515163    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:33.515176    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:31.383419    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:36.054067    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:36.385717    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:36.385961    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:36.409818    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:19:36.409954    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:36.426310    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:19:36.426399    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:36.439455    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:19:36.439533    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:36.450847    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:19:36.450928    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:36.461356    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:19:36.461437    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:36.471697    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:19:36.471786    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:36.482354    5271 logs.go:276] 0 containers: []
	W0913 17:19:36.482368    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:36.482437    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:36.493197    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:19:36.493212    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:19:36.493217    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:19:36.504265    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:19:36.504278    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:19:36.522820    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:36.522829    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:36.546498    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:36.546511    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:36.550823    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:36.550833    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:36.585747    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:19:36.585761    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:36.597470    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:36.597481    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:36.634468    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:19:36.634482    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:19:36.648952    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:19:36.648964    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:19:36.661630    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:19:36.661641    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:19:36.677198    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:19:36.677211    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:19:36.689254    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:19:36.689269    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:19:36.706191    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:19:36.706200    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:19:36.721307    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:19:36.721318    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:19:36.735082    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:19:36.735096    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:19:36.749856    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:19:36.749870    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
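
	Each log-gathering round above follows the same two-step pattern: list container IDs for a component with a docker name filter, then dump the last 400 log lines of each. A minimal bash sketch of that loop, using the same commands that appear verbatim in the log (run inside the guest over ssh):

	    # List all containers (running or exited) whose name matches a k8s component,
	    # then tail the last 400 log lines of each -- the same commands minikube runs above.
	    for component in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	      for id in $(docker ps -a --filter "name=k8s_${component}" --format '{{.ID}}'); do
	        echo "=== ${component} [${id}] ==="
	        docker logs --tail 400 "${id}"
	      done
	    done
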
	I0913 17:19:39.276660    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:41.056407    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:41.056705    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:41.082729    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:19:41.082851    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:41.098983    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:19:41.099083    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:41.114515    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:19:41.114600    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:41.126066    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:19:41.126137    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:41.138075    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:19:41.138146    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:41.149573    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:19:41.149661    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:41.159471    5124 logs.go:276] 0 containers: []
	W0913 17:19:41.159486    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:41.159559    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:41.171865    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:19:41.171886    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:41.171892    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:41.176524    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:41.176531    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:41.211864    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:19:41.211876    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:19:41.224039    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:19:41.224052    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:19:41.235309    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:19:41.235325    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:19:41.246783    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:19:41.246794    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:19:41.265194    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:19:41.265207    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:19:41.279259    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:19:41.279273    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:19:41.293120    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:19:41.293131    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:19:41.305223    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:19:41.305234    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:19:41.319914    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:19:41.319929    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:41.331315    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:41.331326    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:41.370678    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:19:41.370689    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:19:41.382558    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:19:41.382572    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:19:41.400477    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:41.400488    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:43.927143    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:44.278901    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:44.279139    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:44.301754    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:19:44.301879    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:44.317480    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:19:44.317578    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:44.329690    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:19:44.329772    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:44.341615    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:19:44.341700    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:44.352416    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:19:44.352495    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:44.363893    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:19:44.363974    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:44.373922    5271 logs.go:276] 0 containers: []
	W0913 17:19:44.373933    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:44.373999    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:44.384649    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:19:44.384667    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:44.384673    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:44.388909    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:19:44.388919    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:19:44.400118    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:19:44.400130    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:19:44.412035    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:19:44.412050    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:19:44.428640    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:19:44.428650    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:19:44.440495    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:19:44.440506    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:19:44.454308    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:44.454319    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:44.477697    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:19:44.477705    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:44.491822    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:44.491832    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:44.530935    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:19:44.530948    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:19:44.545743    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:19:44.545754    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:19:44.559460    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:44.559472    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:44.594069    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:19:44.594079    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:19:44.608382    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:19:44.608398    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:19:44.621732    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:19:44.621748    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:19:44.648738    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:19:44.648757    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:19:48.929053    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:48.929263    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:48.949241    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:19:48.949354    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:48.964417    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:19:48.964523    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:48.976868    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:19:48.976957    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:48.987456    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:19:48.987546    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:48.998773    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:19:48.998853    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:49.017249    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:19:49.017329    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:49.027600    5124 logs.go:276] 0 containers: []
	W0913 17:19:49.027613    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:49.027684    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:49.039155    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:19:49.039183    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:19:49.039188    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:19:49.050522    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:19:49.050533    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:19:49.066881    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:49.066893    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:49.090566    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:49.090577    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:47.169489    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:52.171782    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:52.171844    5271 kubeadm.go:597] duration metric: took 4m3.193140333s to restartPrimaryControlPlane
	W0913 17:19:52.171892    5271 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 17:19:52.171915    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0913 17:19:53.198581    5271 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.026669916s)
	I0913 17:19:53.198654    5271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 17:19:53.203661    5271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 17:19:53.206529    5271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 17:19:53.209378    5271 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 17:19:53.209384    5271 kubeadm.go:157] found existing configuration files:
	
	I0913 17:19:53.209416    5271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf
	I0913 17:19:53.211955    5271 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 17:19:53.211980    5271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 17:19:53.214880    5271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf
	I0913 17:19:53.217666    5271 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 17:19:53.217695    5271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 17:19:53.220244    5271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf
	I0913 17:19:53.222777    5271 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 17:19:53.222800    5271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 17:19:53.225729    5271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf
	I0913 17:19:53.228212    5271 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 17:19:53.228236    5271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
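
	The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise so the following `kubeadm init` can regenerate it. (grep exits with status 2 here only because `kubeadm reset` already deleted the files.) A condensed sketch of the same check, with the endpoint taken from the log:

	    endpoint="https://control-plane.minikube.internal:50503"
	    for f in admin kubelet controller-manager scheduler; do
	      conf="/etc/kubernetes/${f}.conf"
	      # Keep the file only if it references the expected endpoint; otherwise
	      # remove it so the subsequent `kubeadm init` rewrites it from kubeadm.yaml.
	      sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
	    done
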
	I0913 17:19:53.230831    5271 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 17:19:53.249237    5271 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0913 17:19:53.249267    5271 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 17:19:53.300503    5271 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 17:19:53.300579    5271 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 17:19:53.300634    5271 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0913 17:19:53.348560    5271 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 17:19:53.352744    5271 out.go:235]   - Generating certificates and keys ...
	I0913 17:19:53.352779    5271 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 17:19:53.352816    5271 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 17:19:53.352858    5271 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 17:19:53.352892    5271 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 17:19:53.352957    5271 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 17:19:53.352987    5271 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 17:19:53.353018    5271 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 17:19:53.353051    5271 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 17:19:53.353095    5271 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 17:19:53.353140    5271 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 17:19:53.353160    5271 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 17:19:53.353187    5271 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 17:19:53.542641    5271 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 17:19:53.612353    5271 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 17:19:53.823333    5271 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 17:19:53.914709    5271 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 17:19:53.945786    5271 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 17:19:53.946120    5271 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 17:19:53.946144    5271 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 17:19:54.034729    5271 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
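
	At this point kubeadm has written the four control-plane static Pod manifests; the kubelet watches /etc/kubernetes/manifests and starts them directly, without needing the API server. A quick way to confirm the manifests landed (the four file names match the FileAvailable preflight checks ignored in the init command above):

	    # The kubelet starts these as static Pods as soon as they appear
	    # in the watched manifests directory.
	    sudo ls -la /etc/kubernetes/manifests/
	    # expected: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
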
	I0913 17:19:49.126227    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:19:49.126239    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:19:49.138377    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:19:49.138392    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:19:49.149941    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:19:49.149955    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:19:49.162209    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:19:49.162224    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:19:49.178165    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:19:49.178177    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:19:49.195900    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:19:49.195913    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:49.207290    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:49.207302    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:49.212104    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:19:49.212112    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:19:49.226170    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:19:49.226181    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:19:49.238383    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:49.238396    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:49.277447    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:19:49.277464    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:19:51.798798    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:54.038893    5271 out.go:235]   - Booting up control plane ...
	I0913 17:19:54.038937    5271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 17:19:54.040240    5271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 17:19:54.040729    5271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 17:19:54.040976    5271 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 17:19:54.041831    5271 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 17:19:56.800514    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:56.800650    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:56.812185    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:19:56.812269    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:56.824041    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:19:56.824124    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:56.837272    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:19:56.837362    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:56.848094    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:19:56.848184    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:56.859144    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:19:56.859216    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:56.870232    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:19:56.870320    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:56.885340    5124 logs.go:276] 0 containers: []
	W0913 17:19:56.885353    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:56.885430    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:56.896991    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:19:56.897009    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:56.897015    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:56.936222    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:56.936242    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:56.940987    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:56.940996    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:56.977546    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:19:56.977558    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:19:56.997123    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:19:56.997140    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:19:57.009304    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:19:57.009317    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:19:57.022665    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:19:57.022677    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:19:57.039337    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:57.039348    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:57.065334    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:19:57.065347    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:19:57.081318    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:19:57.081335    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:19:57.094574    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:19:57.094589    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:19:57.107468    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:19:57.107481    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:19:57.124618    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:19:57.124633    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:19:57.144524    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:19:57.144542    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:19:57.156836    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:19:57.156848    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:58.543743    5271 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501712 seconds
	I0913 17:19:58.543804    5271 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 17:19:58.547465    5271 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 17:19:59.061848    5271 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 17:19:59.062015    5271 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-434000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 17:19:59.569779    5271 kubeadm.go:310] [bootstrap-token] Using token: 979w3e.9if25wzhtorqg6a9
	I0913 17:19:59.573413    5271 out.go:235]   - Configuring RBAC rules ...
	I0913 17:19:59.573496    5271 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 17:19:59.573551    5271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 17:19:59.576084    5271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 17:19:59.577265    5271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 17:19:59.578477    5271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 17:19:59.579842    5271 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 17:19:59.584109    5271 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 17:19:59.746378    5271 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 17:19:59.974894    5271 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 17:19:59.975378    5271 kubeadm.go:310] 
	I0913 17:19:59.975415    5271 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 17:19:59.975422    5271 kubeadm.go:310] 
	I0913 17:19:59.975473    5271 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 17:19:59.975478    5271 kubeadm.go:310] 
	I0913 17:19:59.975494    5271 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 17:19:59.975522    5271 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 17:19:59.975552    5271 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 17:19:59.975556    5271 kubeadm.go:310] 
	I0913 17:19:59.975581    5271 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 17:19:59.975584    5271 kubeadm.go:310] 
	I0913 17:19:59.975611    5271 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 17:19:59.975614    5271 kubeadm.go:310] 
	I0913 17:19:59.975649    5271 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 17:19:59.975698    5271 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 17:19:59.975737    5271 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 17:19:59.975741    5271 kubeadm.go:310] 
	I0913 17:19:59.975791    5271 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 17:19:59.975833    5271 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 17:19:59.975836    5271 kubeadm.go:310] 
	I0913 17:19:59.975883    5271 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 979w3e.9if25wzhtorqg6a9 \
	I0913 17:19:59.975939    5271 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:446f8f90cde123cbedc005b3a5de5af09ada936a0c1ba8e89eedb16e20223601 \
	I0913 17:19:59.975950    5271 kubeadm.go:310] 	--control-plane 
	I0913 17:19:59.975954    5271 kubeadm.go:310] 
	I0913 17:19:59.976009    5271 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 17:19:59.976013    5271 kubeadm.go:310] 
	I0913 17:19:59.976065    5271 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 979w3e.9if25wzhtorqg6a9 \
	I0913 17:19:59.976129    5271 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:446f8f90cde123cbedc005b3a5de5af09ada936a0c1ba8e89eedb16e20223601 
	I0913 17:19:59.976409    5271 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
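
	If the join command above is ever replayed by hand, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA to confirm it matches. This is kubeadm's documented recipe (which normally uses /etc/kubernetes/pki/ca.crt; the path below assumes the certificateDir "/var/lib/minikube/certs" reported in the [certs] phase above):

	    # Recompute the sha256 hash of the cluster CA's public key; it should match
	    # the --discovery-token-ca-cert-hash printed by kubeadm above.
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
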
	I0913 17:19:59.976419    5271 cni.go:84] Creating CNI manager for ""
	I0913 17:19:59.976428    5271 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:19:59.979635    5271 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 17:19:59.982548    5271 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 17:19:59.985613    5271 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
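
	The 496-byte conflist minikube copies here is not reproduced in the log. For orientation only, a bridge-plugin conflist of this kind typically has the following shape (field values are illustrative assumptions, not the actual file contents):

	    # Illustrative only -- the general shape of a bridge CNI conflist,
	    # not the exact /etc/cni/net.d/1-k8s.conflist that minikube writes.
	    cat <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF
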
	I0913 17:19:59.990261    5271 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 17:19:59.990308    5271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 17:19:59.990388    5271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-434000 minikube.k8s.io/updated_at=2024_09_13T17_19_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=stopped-upgrade-434000 minikube.k8s.io/primary=true
	I0913 17:20:00.019783    5271 kubeadm.go:1113] duration metric: took 29.51425ms to wait for elevateKubeSystemPrivileges
	I0913 17:20:00.019798    5271 ops.go:34] apiserver oom_adj: -16
	I0913 17:20:00.029195    5271 kubeadm.go:394] duration metric: took 4m11.063679875s to StartCluster
	I0913 17:20:00.029213    5271 settings.go:142] acquiring lock: {Name:mk948e653988f014de7183ca44ad61265c2dc06d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:20:00.029306    5271 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:20:00.029713    5271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/kubeconfig: {Name:mke2b016812cedc34ffbfc79dbc5c22d8c43c377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:20:00.029920    5271 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:20:00.029931    5271 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 17:20:00.029972    5271 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-434000"
	I0913 17:20:00.029983    5271 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-434000"
	W0913 17:20:00.029989    5271 addons.go:243] addon storage-provisioner should already be in state true
	I0913 17:20:00.030001    5271 host.go:66] Checking if "stopped-upgrade-434000" exists ...
	I0913 17:20:00.030030    5271 config.go:182] Loaded profile config "stopped-upgrade-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:20:00.030030    5271 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-434000"
	I0913 17:20:00.030071    5271 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-434000"
	I0913 17:20:00.030978    5271 kapi.go:59] client config for stopped-upgrade-434000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/client.key", CAFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102685800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 17:20:00.031105    5271 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-434000"
	W0913 17:20:00.031109    5271 addons.go:243] addon default-storageclass should already be in state true
	I0913 17:20:00.031115    5271 host.go:66] Checking if "stopped-upgrade-434000" exists ...
	I0913 17:20:00.033446    5271 out.go:177] * Verifying Kubernetes components...
	I0913 17:20:00.033824    5271 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 17:20:00.037735    5271 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 17:20:00.037742    5271 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/id_rsa Username:docker}
	I0913 17:20:00.041381    5271 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:19:59.671098    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:00.045449    5271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:20:00.049360    5271 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 17:20:00.049367    5271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 17:20:00.049374    5271 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/id_rsa Username:docker}
	I0913 17:20:00.130387    5271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 17:20:00.137736    5271 api_server.go:52] waiting for apiserver process to appear ...
	I0913 17:20:00.137793    5271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 17:20:00.141742    5271 api_server.go:72] duration metric: took 111.812666ms to wait for apiserver process to appear ...
	I0913 17:20:00.141753    5271 api_server.go:88] waiting for apiserver healthz status ...
	I0913 17:20:00.141760    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:00.187232    5271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 17:20:00.201972    5271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 17:20:00.522102    5271 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0913 17:20:00.522114    5271 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0913 17:20:04.671644    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:04.672161    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:20:04.708370    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:20:04.708524    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:20:04.726018    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:20:04.726126    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:20:04.740412    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:20:04.740506    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:20:04.752300    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:20:04.752371    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:20:04.763302    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:20:04.763383    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:20:04.774853    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:20:04.774938    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:20:04.788665    5124 logs.go:276] 0 containers: []
	W0913 17:20:04.788677    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:20:04.788752    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:20:04.799409    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:20:04.799431    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:20:04.799437    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:20:04.811923    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:20:04.811934    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:20:04.829639    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:20:04.829653    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:20:04.841433    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:20:04.841445    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:20:04.853923    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:20:04.853936    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:20:04.868769    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:20:04.868785    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:20:04.873415    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:20:04.873423    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:20:04.907795    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:20:04.907810    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:20:04.920454    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:20:04.920470    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:20:04.945562    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:20:04.945570    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:20:04.983134    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:20:04.983143    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:20:04.994864    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:20:04.994875    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:20:05.010687    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:20:05.010698    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:20:05.022884    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:20:05.022895    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:20:05.037651    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:20:05.037660    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:20:07.550406    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:05.141795    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:05.141816    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:12.552749    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:12.552914    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:20:12.563897    5124 logs.go:276] 1 containers: [136509bb2488]
	I0913 17:20:12.563978    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:20:12.575074    5124 logs.go:276] 1 containers: [5b963d6f284a]
	I0913 17:20:12.575158    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:20:12.586641    5124 logs.go:276] 4 containers: [bcc7346a93be 1cf00c49e05f 3636c038ac5d 80c9d19704af]
	I0913 17:20:12.586717    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:20:12.597379    5124 logs.go:276] 1 containers: [5dcbd870db4b]
	I0913 17:20:12.597465    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:20:12.608157    5124 logs.go:276] 1 containers: [d6595dc4ece7]
	I0913 17:20:12.608244    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:20:12.618891    5124 logs.go:276] 1 containers: [00ce23810812]
	I0913 17:20:12.618980    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:20:12.630060    5124 logs.go:276] 0 containers: []
	W0913 17:20:12.630074    5124 logs.go:278] No container was found matching "kindnet"
	I0913 17:20:12.630147    5124 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:20:12.640495    5124 logs.go:276] 1 containers: [6cd37f5cce2c]
	I0913 17:20:12.640512    5124 logs.go:123] Gathering logs for coredns [80c9d19704af] ...
	I0913 17:20:12.640518    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 80c9d19704af"
	I0913 17:20:12.653090    5124 logs.go:123] Gathering logs for kube-scheduler [5dcbd870db4b] ...
	I0913 17:20:12.653106    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5dcbd870db4b"
	I0913 17:20:12.668943    5124 logs.go:123] Gathering logs for kube-proxy [d6595dc4ece7] ...
	I0913 17:20:12.668955    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d6595dc4ece7"
	I0913 17:20:12.680877    5124 logs.go:123] Gathering logs for coredns [bcc7346a93be] ...
	I0913 17:20:12.680888    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bcc7346a93be"
	I0913 17:20:12.692841    5124 logs.go:123] Gathering logs for coredns [1cf00c49e05f] ...
	I0913 17:20:12.692852    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cf00c49e05f"
	I0913 17:20:12.704692    5124 logs.go:123] Gathering logs for Docker ...
	I0913 17:20:12.704702    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:20:12.728647    5124 logs.go:123] Gathering logs for kubelet ...
	I0913 17:20:12.728659    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:20:12.765969    5124 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:20:12.765978    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:20:12.801271    5124 logs.go:123] Gathering logs for etcd [5b963d6f284a] ...
	I0913 17:20:12.801286    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b963d6f284a"
	I0913 17:20:12.815937    5124 logs.go:123] Gathering logs for coredns [3636c038ac5d] ...
	I0913 17:20:12.815949    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3636c038ac5d"
	I0913 17:20:12.838930    5124 logs.go:123] Gathering logs for container status ...
	I0913 17:20:12.838942    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:20:12.850519    5124 logs.go:123] Gathering logs for dmesg ...
	I0913 17:20:12.850530    5124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:20:12.855031    5124 logs.go:123] Gathering logs for kube-apiserver [136509bb2488] ...
	I0913 17:20:12.855039    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 136509bb2488"
	I0913 17:20:12.869854    5124 logs.go:123] Gathering logs for kube-controller-manager [00ce23810812] ...
	I0913 17:20:12.869866    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00ce23810812"
	I0913 17:20:12.887945    5124 logs.go:123] Gathering logs for storage-provisioner [6cd37f5cce2c] ...
	I0913 17:20:12.887956    5124 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6cd37f5cce2c"
	I0913 17:20:10.143682    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:10.143723    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:15.402152    5124 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:15.143892    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:15.143915    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:20.404395    5124 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:20.408561    5124 out.go:201] 
	W0913 17:20:20.411327    5124 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0913 17:20:20.411337    5124 out.go:270] * 
	W0913 17:20:20.411980    5124 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:20:20.427292    5124 out.go:201] 
	I0913 17:20:20.144608    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:20.144650    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:25.145132    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:25.145172    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:30.145835    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:30.145877    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0913 17:20:30.523934    5271 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0913 17:20:30.527240    5271 out.go:177] * Enabled addons: storage-provisioner
	I0913 17:20:30.534117    5271 addons.go:510] duration metric: took 30.504648875s for enable addons: enabled=[storage-provisioner]
	
	
	==> Docker <==
	-- Journal begins at Sat 2024-09-14 00:11:33 UTC, ends at Sat 2024-09-14 00:20:36 UTC. --
	Sep 14 00:20:20 running-upgrade-714000 dockerd[2856]: time="2024-09-14T00:20:20.967733135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 14 00:20:20 running-upgrade-714000 dockerd[2856]: time="2024-09-14T00:20:20.967850797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 14 00:20:20 running-upgrade-714000 dockerd[2856]: time="2024-09-14T00:20:20.967929252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 14 00:20:20 running-upgrade-714000 dockerd[2856]: time="2024-09-14T00:20:20.968046039Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c307e7a8c5469708cc9c243afbef84cf45098b2dbaf96c4e889b7067483620a1 pid=18463 runtime=io.containerd.runc.v2
	Sep 14 00:20:21 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:21Z" level=error msg="ContainerStats resp: {0x400035a800 linux}"
	Sep 14 00:20:22 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:22Z" level=error msg="ContainerStats resp: {0x4000847f40 linux}"
	Sep 14 00:20:22 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:22Z" level=error msg="ContainerStats resp: {0x40006bd840 linux}"
	Sep 14 00:20:22 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:22Z" level=error msg="ContainerStats resp: {0x4000974700 linux}"
	Sep 14 00:20:22 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:22Z" level=error msg="ContainerStats resp: {0x40007b4780 linux}"
	Sep 14 00:20:22 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:22Z" level=error msg="ContainerStats resp: {0x40007b4f00 linux}"
	Sep 14 00:20:22 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:22Z" level=error msg="ContainerStats resp: {0x4000975b40 linux}"
	Sep 14 00:20:22 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:22Z" level=error msg="ContainerStats resp: {0x40007b54c0 linux}"
	Sep 14 00:20:22 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:22Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 14 00:20:27 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:27Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 14 00:20:32 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:32Z" level=error msg="ContainerStats resp: {0x40008463c0 linux}"
	Sep 14 00:20:32 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:32Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Sep 14 00:20:32 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:32Z" level=error msg="ContainerStats resp: {0x40007e7d80 linux}"
	Sep 14 00:20:33 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:33Z" level=error msg="ContainerStats resp: {0x40007b5040 linux}"
	Sep 14 00:20:34 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:34Z" level=error msg="ContainerStats resp: {0x40007b4940 linux}"
	Sep 14 00:20:34 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:34Z" level=error msg="ContainerStats resp: {0x40006bd880 linux}"
	Sep 14 00:20:34 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:34Z" level=error msg="ContainerStats resp: {0x40006bdd40 linux}"
	Sep 14 00:20:34 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:34Z" level=error msg="ContainerStats resp: {0x40007b58c0 linux}"
	Sep 14 00:20:34 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:34Z" level=error msg="ContainerStats resp: {0x40007b5a80 linux}"
	Sep 14 00:20:34 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:34Z" level=error msg="ContainerStats resp: {0x40007b5ec0 linux}"
	Sep 14 00:20:34 running-upgrade-714000 cri-dockerd[2694]: time="2024-09-14T00:20:34Z" level=error msg="ContainerStats resp: {0x40000b9740 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	4cdfb7c972f8e       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   6b2f39aafaf7d
	c307e7a8c5469       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   6e3db21da8364
	bcc7346a93be4       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   6e3db21da8364
	1cf00c49e05f5       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   6b2f39aafaf7d
	6cd37f5cce2c9       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   be1f9b820415c
	d6595dc4ece78       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   63e1bc0968c94
	5dcbd870db4b4       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   309d89010e2e7
	00ce23810812e       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   1a8e1469a19c2
	5b963d6f284a6       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   4075f6f868bae
	136509bb24881       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   777ce13735386
	
	
	==> coredns [1cf00c49e05f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4901817031316738339.5081123095621217105. HINFO: read udp 10.244.0.2:50639->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4901817031316738339.5081123095621217105. HINFO: read udp 10.244.0.2:42252->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4901817031316738339.5081123095621217105. HINFO: read udp 10.244.0.2:54416->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4901817031316738339.5081123095621217105. HINFO: read udp 10.244.0.2:44201->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4901817031316738339.5081123095621217105. HINFO: read udp 10.244.0.2:60884->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4901817031316738339.5081123095621217105. HINFO: read udp 10.244.0.2:42958->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4901817031316738339.5081123095621217105. HINFO: read udp 10.244.0.2:47739->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4901817031316738339.5081123095621217105. HINFO: read udp 10.244.0.2:46491->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4901817031316738339.5081123095621217105. HINFO: read udp 10.244.0.2:56782->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4901817031316738339.5081123095621217105. HINFO: read udp 10.244.0.2:38787->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4cdfb7c972f8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7889050483018388508.7876639448427934825. HINFO: read udp 10.244.0.2:55896->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7889050483018388508.7876639448427934825. HINFO: read udp 10.244.0.2:38087->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7889050483018388508.7876639448427934825. HINFO: read udp 10.244.0.2:33517->10.0.2.3:53: i/o timeout
	
	
	==> coredns [bcc7346a93be] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6902140517169790237.1857207537069035243. HINFO: read udp 10.244.0.3:34920->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6902140517169790237.1857207537069035243. HINFO: read udp 10.244.0.3:45096->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6902140517169790237.1857207537069035243. HINFO: read udp 10.244.0.3:51641->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6902140517169790237.1857207537069035243. HINFO: read udp 10.244.0.3:46515->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6902140517169790237.1857207537069035243. HINFO: read udp 10.244.0.3:36650->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6902140517169790237.1857207537069035243. HINFO: read udp 10.244.0.3:44260->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6902140517169790237.1857207537069035243. HINFO: read udp 10.244.0.3:59715->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6902140517169790237.1857207537069035243. HINFO: read udp 10.244.0.3:50097->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6902140517169790237.1857207537069035243. HINFO: read udp 10.244.0.3:35838->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6902140517169790237.1857207537069035243. HINFO: read udp 10.244.0.3:46862->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c307e7a8c546] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8818340242772391355.7365636693829217843. HINFO: read udp 10.244.0.3:58872->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8818340242772391355.7365636693829217843. HINFO: read udp 10.244.0.3:37829->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8818340242772391355.7365636693829217843. HINFO: read udp 10.244.0.3:37235->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-714000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-714000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=running-upgrade-714000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T17_16_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:16:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-714000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:20:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:16:19 +0000   Sat, 14 Sep 2024 00:16:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:16:19 +0000   Sat, 14 Sep 2024 00:16:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:16:19 +0000   Sat, 14 Sep 2024 00:16:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:16:19 +0000   Sat, 14 Sep 2024 00:16:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-714000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 892ddcc3d0c54c07b1db02a1225b3317
	  System UUID:                892ddcc3d0c54c07b1db02a1225b3317
	  Boot ID:                    016a30e3-f5d2-45ef-8311-5ccd4687ba88
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-dd9gm                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-hxwsv                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-714000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-714000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-714000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-2jscx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-714000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-714000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-714000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x3 over 4m22s)  kubelet          Node running-upgrade-714000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-714000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-714000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-714000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-714000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-714000 event: Registered Node running-upgrade-714000 in Controller
	
	
	==> dmesg <==
	[  +1.641637] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.069013] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.076608] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.144218] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.082321] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.080943] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +2.017480] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[  +8.654768] systemd-fstab-generator[1937]: Ignoring "noauto" for root device
	[  +2.658868] systemd-fstab-generator[2224]: Ignoring "noauto" for root device
	[  +0.163989] systemd-fstab-generator[2257]: Ignoring "noauto" for root device
	[  +0.091403] systemd-fstab-generator[2268]: Ignoring "noauto" for root device
	[Sep14 00:12] systemd-fstab-generator[2281]: Ignoring "noauto" for root device
	[  +1.341423] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.127536] systemd-fstab-generator[2651]: Ignoring "noauto" for root device
	[  +0.085170] systemd-fstab-generator[2662]: Ignoring "noauto" for root device
	[  +0.076973] systemd-fstab-generator[2673]: Ignoring "noauto" for root device
	[  +0.080534] systemd-fstab-generator[2687]: Ignoring "noauto" for root device
	[  +2.487973] systemd-fstab-generator[2839]: Ignoring "noauto" for root device
	[  +2.277282] systemd-fstab-generator[3186]: Ignoring "noauto" for root device
	[  +0.962957] systemd-fstab-generator[3329]: Ignoring "noauto" for root device
	[ +17.706356] kauditd_printk_skb: 68 callbacks suppressed
	[Sep14 00:13] kauditd_printk_skb: 21 callbacks suppressed
	[Sep14 00:16] systemd-fstab-generator[11511]: Ignoring "noauto" for root device
	[  +5.640182] systemd-fstab-generator[12117]: Ignoring "noauto" for root device
	[  +0.463503] systemd-fstab-generator[12253]: Ignoring "noauto" for root device
	
	
	==> etcd [5b963d6f284a] <==
	{"level":"info","ts":"2024-09-14T00:16:15.245Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T00:16:15.245Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T00:16:15.245Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-09-14T00:16:15.245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-09-14T00:16:15.245Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-09-14T00:16:15.245Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-14T00:16:15.245Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-09-14T00:16:15.698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-14T00:16:15.698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-14T00:16:15.698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-09-14T00:16:15.698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-09-14T00:16:15.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-14T00:16:15.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-09-14T00:16:15.699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-09-14T00:16:15.699Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:16:15.700Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T00:16:15.700Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:16:15.699Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-714000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T00:16:15.700Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:16:15.701Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-09-14T00:16:15.701Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T00:16:15.701Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T00:16:15.706Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:16:15.706Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:16:15.706Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 00:20:36 up 9 min,  0 users,  load average: 0.13, 0.10, 0.03
	Linux running-upgrade-714000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [136509bb2488] <==
	I0914 00:16:16.901944       1 controller.go:611] quota admission added evaluator for: namespaces
	I0914 00:16:16.947112       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0914 00:16:16.948225       1 cache.go:39] Caches are synced for autoregister controller
	I0914 00:16:16.949997       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0914 00:16:16.950259       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0914 00:16:16.950408       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0914 00:16:16.952703       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0914 00:16:17.671216       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0914 00:16:17.850389       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0914 00:16:17.852814       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0914 00:16:17.852890       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0914 00:16:17.977289       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 00:16:17.990401       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0914 00:16:18.006215       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0914 00:16:18.008398       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0914 00:16:18.008840       1 controller.go:611] quota admission added evaluator for: endpoints
	I0914 00:16:18.010222       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 00:16:18.978243       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0914 00:16:19.336580       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0914 00:16:19.340726       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0914 00:16:19.351589       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0914 00:16:19.383560       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 00:16:31.932261       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0914 00:16:32.631424       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0914 00:16:33.040091       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [00ce23810812] <==
	I0914 00:16:31.813099       1 shared_informer.go:262] Caches are synced for expand
	I0914 00:16:31.817330       1 shared_informer.go:262] Caches are synced for PVC protection
	I0914 00:16:31.820442       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0914 00:16:31.820462       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0914 00:16:31.820531       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0914 00:16:31.820534       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0914 00:16:31.826832       1 shared_informer.go:262] Caches are synced for deployment
	I0914 00:16:31.829850       1 shared_informer.go:262] Caches are synced for disruption
	I0914 00:16:31.829882       1 disruption.go:371] Sending events to api server.
	I0914 00:16:31.830964       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0914 00:16:31.830970       1 shared_informer.go:262] Caches are synced for job
	I0914 00:16:31.830982       1 shared_informer.go:262] Caches are synced for persistent volume
	I0914 00:16:31.923682       1 shared_informer.go:262] Caches are synced for attach detach
	I0914 00:16:31.924857       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0914 00:16:31.929818       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0914 00:16:31.935440       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2jscx"
	I0914 00:16:31.984285       1 shared_informer.go:262] Caches are synced for resource quota
	I0914 00:16:32.015155       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0914 00:16:32.033187       1 shared_informer.go:262] Caches are synced for resource quota
	I0914 00:16:32.456938       1 shared_informer.go:262] Caches are synced for garbage collector
	I0914 00:16:32.529613       1 shared_informer.go:262] Caches are synced for garbage collector
	I0914 00:16:32.529626       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0914 00:16:32.632754       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0914 00:16:32.833063       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-dd9gm"
	I0914 00:16:32.837108       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-hxwsv"
	
	
	==> kube-proxy [d6595dc4ece7] <==
	I0914 00:16:33.028964       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0914 00:16:33.028992       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0914 00:16:33.029003       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0914 00:16:33.037712       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0914 00:16:33.037723       1 server_others.go:206] "Using iptables Proxier"
	I0914 00:16:33.037735       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0914 00:16:33.037828       1 server.go:661] "Version info" version="v1.24.1"
	I0914 00:16:33.037832       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:16:33.038050       1 config.go:317] "Starting service config controller"
	I0914 00:16:33.038056       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0914 00:16:33.038063       1 config.go:226] "Starting endpoint slice config controller"
	I0914 00:16:33.038065       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0914 00:16:33.038303       1 config.go:444] "Starting node config controller"
	I0914 00:16:33.038306       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0914 00:16:33.138637       1 shared_informer.go:262] Caches are synced for node config
	I0914 00:16:33.138654       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0914 00:16:33.138667       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [5dcbd870db4b] <==
	W0914 00:16:16.894968       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 00:16:16.894995       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0914 00:16:16.895107       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 00:16:16.895133       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0914 00:16:16.895165       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 00:16:16.895180       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0914 00:16:16.895224       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 00:16:16.895246       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0914 00:16:16.895289       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 00:16:16.895309       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0914 00:16:16.895338       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 00:16:16.895353       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0914 00:16:17.732309       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 00:16:17.732622       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0914 00:16:17.736537       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 00:16:17.736554       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0914 00:16:17.755446       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 00:16:17.755470       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0914 00:16:17.794925       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 00:16:17.794989       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0914 00:16:17.893275       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 00:16:17.893363       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 00:16:17.906911       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 00:16:17.906988       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0914 00:16:20.089193       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Sat 2024-09-14 00:11:33 UTC, ends at Sat 2024-09-14 00:20:36 UTC. --
	Sep 14 00:16:31 running-upgrade-714000 kubelet[12123]: I0914 00:16:31.787670   12123 topology_manager.go:200] "Topology Admit Handler"
	Sep 14 00:16:31 running-upgrade-714000 kubelet[12123]: I0914 00:16:31.883052   12123 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/426dd427-8dbd-4eea-af0e-4866d84c0995-tmp\") pod \"storage-provisioner\" (UID: \"426dd427-8dbd-4eea-af0e-4866d84c0995\") " pod="kube-system/storage-provisioner"
	Sep 14 00:16:31 running-upgrade-714000 kubelet[12123]: I0914 00:16:31.883079   12123 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkxsd\" (UniqueName: \"kubernetes.io/projected/426dd427-8dbd-4eea-af0e-4866d84c0995-kube-api-access-xkxsd\") pod \"storage-provisioner\" (UID: \"426dd427-8dbd-4eea-af0e-4866d84c0995\") " pod="kube-system/storage-provisioner"
	Sep 14 00:16:31 running-upgrade-714000 kubelet[12123]: I0914 00:16:31.937355   12123 topology_manager.go:200] "Topology Admit Handler"
	Sep 14 00:16:31 running-upgrade-714000 kubelet[12123]: I0914 00:16:31.983548   12123 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkf26\" (UniqueName: \"kubernetes.io/projected/8a82408b-fba7-4fb3-8b4d-e69f68c46213-kube-api-access-zkf26\") pod \"kube-proxy-2jscx\" (UID: \"8a82408b-fba7-4fb3-8b4d-e69f68c46213\") " pod="kube-system/kube-proxy-2jscx"
	Sep 14 00:16:31 running-upgrade-714000 kubelet[12123]: I0914 00:16:31.983590   12123 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a82408b-fba7-4fb3-8b4d-e69f68c46213-lib-modules\") pod \"kube-proxy-2jscx\" (UID: \"8a82408b-fba7-4fb3-8b4d-e69f68c46213\") " pod="kube-system/kube-proxy-2jscx"
	Sep 14 00:16:31 running-upgrade-714000 kubelet[12123]: I0914 00:16:31.983601   12123 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a82408b-fba7-4fb3-8b4d-e69f68c46213-kube-proxy\") pod \"kube-proxy-2jscx\" (UID: \"8a82408b-fba7-4fb3-8b4d-e69f68c46213\") " pod="kube-system/kube-proxy-2jscx"
	Sep 14 00:16:31 running-upgrade-714000 kubelet[12123]: I0914 00:16:31.983610   12123 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a82408b-fba7-4fb3-8b4d-e69f68c46213-xtables-lock\") pod \"kube-proxy-2jscx\" (UID: \"8a82408b-fba7-4fb3-8b4d-e69f68c46213\") " pod="kube-system/kube-proxy-2jscx"
	Sep 14 00:16:31 running-upgrade-714000 kubelet[12123]: E0914 00:16:31.986436   12123 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 14 00:16:31 running-upgrade-714000 kubelet[12123]: E0914 00:16:31.986449   12123 projected.go:192] Error preparing data for projected volume kube-api-access-xkxsd for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 14 00:16:31 running-upgrade-714000 kubelet[12123]: E0914 00:16:31.986478   12123 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/426dd427-8dbd-4eea-af0e-4866d84c0995-kube-api-access-xkxsd podName:426dd427-8dbd-4eea-af0e-4866d84c0995 nodeName:}" failed. No retries permitted until 2024-09-14 00:16:32.486466618 +0000 UTC m=+13.163233996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xkxsd" (UniqueName: "kubernetes.io/projected/426dd427-8dbd-4eea-af0e-4866d84c0995-kube-api-access-xkxsd") pod "storage-provisioner" (UID: "426dd427-8dbd-4eea-af0e-4866d84c0995") : configmap "kube-root-ca.crt" not found
	Sep 14 00:16:32 running-upgrade-714000 kubelet[12123]: E0914 00:16:32.086646   12123 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 14 00:16:32 running-upgrade-714000 kubelet[12123]: E0914 00:16:32.086662   12123 projected.go:192] Error preparing data for projected volume kube-api-access-zkf26 for pod kube-system/kube-proxy-2jscx: configmap "kube-root-ca.crt" not found
	Sep 14 00:16:32 running-upgrade-714000 kubelet[12123]: E0914 00:16:32.086685   12123 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/8a82408b-fba7-4fb3-8b4d-e69f68c46213-kube-api-access-zkf26 podName:8a82408b-fba7-4fb3-8b4d-e69f68c46213 nodeName:}" failed. No retries permitted until 2024-09-14 00:16:32.586676404 +0000 UTC m=+13.263443783 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zkf26" (UniqueName: "kubernetes.io/projected/8a82408b-fba7-4fb3-8b4d-e69f68c46213-kube-api-access-zkf26") pod "kube-proxy-2jscx" (UID: "8a82408b-fba7-4fb3-8b4d-e69f68c46213") : configmap "kube-root-ca.crt" not found
	Sep 14 00:16:32 running-upgrade-714000 kubelet[12123]: E0914 00:16:32.487657   12123 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 14 00:16:32 running-upgrade-714000 kubelet[12123]: E0914 00:16:32.487685   12123 projected.go:192] Error preparing data for projected volume kube-api-access-xkxsd for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 14 00:16:32 running-upgrade-714000 kubelet[12123]: E0914 00:16:32.487730   12123 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/426dd427-8dbd-4eea-af0e-4866d84c0995-kube-api-access-xkxsd podName:426dd427-8dbd-4eea-af0e-4866d84c0995 nodeName:}" failed. No retries permitted until 2024-09-14 00:16:33.487717568 +0000 UTC m=+14.164484947 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xkxsd" (UniqueName: "kubernetes.io/projected/426dd427-8dbd-4eea-af0e-4866d84c0995-kube-api-access-xkxsd") pod "storage-provisioner" (UID: "426dd427-8dbd-4eea-af0e-4866d84c0995") : configmap "kube-root-ca.crt" not found
	Sep 14 00:16:32 running-upgrade-714000 kubelet[12123]: I0914 00:16:32.835362   12123 topology_manager.go:200] "Topology Admit Handler"
	Sep 14 00:16:32 running-upgrade-714000 kubelet[12123]: I0914 00:16:32.843933   12123 topology_manager.go:200] "Topology Admit Handler"
	Sep 14 00:16:32 running-upgrade-714000 kubelet[12123]: I0914 00:16:32.891509   12123 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/559114e0-6d4b-4794-a6ac-037fd8bc747a-config-volume\") pod \"coredns-6d4b75cb6d-dd9gm\" (UID: \"559114e0-6d4b-4794-a6ac-037fd8bc747a\") " pod="kube-system/coredns-6d4b75cb6d-dd9gm"
	Sep 14 00:16:32 running-upgrade-714000 kubelet[12123]: I0914 00:16:32.891532   12123 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttm6m\" (UniqueName: \"kubernetes.io/projected/559114e0-6d4b-4794-a6ac-037fd8bc747a-kube-api-access-ttm6m\") pod \"coredns-6d4b75cb6d-dd9gm\" (UID: \"559114e0-6d4b-4794-a6ac-037fd8bc747a\") " pod="kube-system/coredns-6d4b75cb6d-dd9gm"
	Sep 14 00:16:32 running-upgrade-714000 kubelet[12123]: I0914 00:16:32.891562   12123 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e42e68de-1bc5-4ded-a81b-06ca4e6b9351-config-volume\") pod \"coredns-6d4b75cb6d-hxwsv\" (UID: \"e42e68de-1bc5-4ded-a81b-06ca4e6b9351\") " pod="kube-system/coredns-6d4b75cb6d-hxwsv"
	Sep 14 00:16:32 running-upgrade-714000 kubelet[12123]: I0914 00:16:32.891600   12123 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nsjj\" (UniqueName: \"kubernetes.io/projected/e42e68de-1bc5-4ded-a81b-06ca4e6b9351-kube-api-access-4nsjj\") pod \"coredns-6d4b75cb6d-hxwsv\" (UID: \"e42e68de-1bc5-4ded-a81b-06ca4e6b9351\") " pod="kube-system/coredns-6d4b75cb6d-hxwsv"
	Sep 14 00:20:21 running-upgrade-714000 kubelet[12123]: I0914 00:20:21.724112   12123 scope.go:110] "RemoveContainer" containerID="80c9d19704af6dc4894155406b157f4b6e735e8d4e95b72182ff652dabfdd480"
	Sep 14 00:20:21 running-upgrade-714000 kubelet[12123]: I0914 00:20:21.746083   12123 scope.go:110] "RemoveContainer" containerID="3636c038ac5d05c85e5b83a113043f62ac4a87d4c5d04bd1469a56cace9d5373"
	
	
	==> storage-provisioner [6cd37f5cce2c] <==
	I0914 00:16:33.782640       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 00:16:33.787131       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 00:16:33.787150       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 00:16:33.791069       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 00:16:33.791209       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-714000_565fa937-e0b3-4156-a7ac-1cd20051f8da!
	I0914 00:16:33.791501       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af17d355-1f39-49b6-b5e4-1704fe5e7271", APIVersion:"v1", ResourceVersion:"382", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-714000_565fa937-e0b3-4156-a7ac-1cd20051f8da became leader
	I0914 00:16:33.892075       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-714000_565fa937-e0b3-4156-a7ac-1cd20051f8da!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-714000 -n running-upgrade-714000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-714000 -n running-upgrade-714000: exit status 2 (15.633575625s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-714000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-714000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-714000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-714000: (2.646964917s)
--- FAIL: TestRunningBinaryUpgrade (587.33s)
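
The failure mode above (api_server.go polling /healthz roughly every 5s until the 6m0s node-start deadline) can be probed by hand. Below is a minimal Go sketch, not minikube's actual api_server.go code: the address is taken from the log, and InsecureSkipVerify is an assumption used only because the host does not trust the test VM's serving certificate.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Same endpoint the log polls: https://10.0.2.15:8443/healthz
		client := &http.Client{
			Timeout: 5 * time.Second, // mirrors the ~5s retry cadence in the log
			Transport: &http.Transport{
				// Assumption: skip cert verification; minikube itself uses the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			// On a run like the one above, this would report the same
			// "context deadline exceeded" / timeout error.
			fmt.Println("stopped:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %s (HTTP %d)\n", body, resp.StatusCode)
	}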

TestKubernetesUpgrade (19s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-171000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-171000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.909178875s)

-- stdout --
	* [kubernetes-upgrade-171000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-171000" primary control-plane node in "kubernetes-upgrade-171000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-171000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
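
The repeated 'ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused' in the stdout above points at the socket_vmnet daemon on the host rather than at minikube itself. A minimal Go sketch to confirm whether anything is accepting connections on that unix socket; the path is taken from the output, and this check is a triage aid suggested here, not part of the test suite.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket path reported by minikube's qemu2 driver.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here matches the test output and would
			// suggest the socket_vmnet daemon is not running on the host.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}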
** stderr ** 
	I0913 17:14:07.954314    5201 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:14:07.954443    5201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:14:07.954447    5201 out.go:358] Setting ErrFile to fd 2...
	I0913 17:14:07.954452    5201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:14:07.954569    5201 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:14:07.955785    5201 out.go:352] Setting JSON to false
	I0913 17:14:07.972909    5201 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4411,"bootTime":1726268436,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:14:07.972988    5201 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:14:07.978281    5201 out.go:177] * [kubernetes-upgrade-171000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:14:07.987183    5201 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:14:07.987232    5201 notify.go:220] Checking for updates...
	I0913 17:14:07.995144    5201 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:14:07.998158    5201 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:14:08.001173    5201 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:14:08.004081    5201 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:14:08.007133    5201 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:14:08.010364    5201 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:14:08.010427    5201 config.go:182] Loaded profile config "running-upgrade-714000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:14:08.010467    5201 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:14:08.015145    5201 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:14:08.022099    5201 start.go:297] selected driver: qemu2
	I0913 17:14:08.022104    5201 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:14:08.022110    5201 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:14:08.024450    5201 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:14:08.027111    5201 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:14:08.030218    5201 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 17:14:08.030231    5201 cni.go:84] Creating CNI manager for ""
	I0913 17:14:08.030251    5201 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0913 17:14:08.030279    5201 start.go:340] cluster config:
	{Name:kubernetes-upgrade-171000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:14:08.033777    5201 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:14:08.041119    5201 out.go:177] * Starting "kubernetes-upgrade-171000" primary control-plane node in "kubernetes-upgrade-171000" cluster
	I0913 17:14:08.044886    5201 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 17:14:08.044898    5201 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0913 17:14:08.044905    5201 cache.go:56] Caching tarball of preloaded images
	I0913 17:14:08.044954    5201 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:14:08.044960    5201 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0913 17:14:08.045008    5201 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/kubernetes-upgrade-171000/config.json ...
	I0913 17:14:08.045018    5201 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/kubernetes-upgrade-171000/config.json: {Name:mk81f05ab361e4ca550792eca8e64bd73dd43ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:14:08.045366    5201 start.go:360] acquireMachinesLock for kubernetes-upgrade-171000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:14:08.045400    5201 start.go:364] duration metric: took 27.917µs to acquireMachinesLock for "kubernetes-upgrade-171000"
	I0913 17:14:08.045410    5201 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:14:08.045434    5201 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:14:08.050128    5201 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 17:14:08.065647    5201 start.go:159] libmachine.API.Create for "kubernetes-upgrade-171000" (driver="qemu2")
	I0913 17:14:08.065674    5201 client.go:168] LocalClient.Create starting
	I0913 17:14:08.065740    5201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:14:08.065771    5201 main.go:141] libmachine: Decoding PEM data...
	I0913 17:14:08.065781    5201 main.go:141] libmachine: Parsing certificate...
	I0913 17:14:08.065818    5201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:14:08.065846    5201 main.go:141] libmachine: Decoding PEM data...
	I0913 17:14:08.065853    5201 main.go:141] libmachine: Parsing certificate...
	I0913 17:14:08.066255    5201 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:14:08.272627    5201 main.go:141] libmachine: Creating SSH key...
	I0913 17:14:08.357150    5201 main.go:141] libmachine: Creating Disk image...
	I0913 17:14:08.357159    5201 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:14:08.357348    5201 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2
	I0913 17:14:08.366909    5201 main.go:141] libmachine: STDOUT: 
	I0913 17:14:08.366927    5201 main.go:141] libmachine: STDERR: 
	I0913 17:14:08.366983    5201 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2 +20000M
	I0913 17:14:08.375266    5201 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:14:08.375282    5201 main.go:141] libmachine: STDERR: 
	I0913 17:14:08.375300    5201 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2
	I0913 17:14:08.375306    5201 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:14:08.375319    5201 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:14:08.375342    5201 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:7a:77:1f:02:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2
	I0913 17:14:08.377020    5201 main.go:141] libmachine: STDOUT: 
	I0913 17:14:08.377036    5201 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:14:08.377059    5201 client.go:171] duration metric: took 311.379833ms to LocalClient.Create
	I0913 17:14:10.379280    5201 start.go:128] duration metric: took 2.333850375s to createHost
	I0913 17:14:10.379391    5201 start.go:83] releasing machines lock for "kubernetes-upgrade-171000", held for 2.334016333s
	W0913 17:14:10.379483    5201 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:14:10.395846    5201 out.go:177] * Deleting "kubernetes-upgrade-171000" in qemu2 ...
	W0913 17:14:10.429501    5201 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:14:10.429535    5201 start.go:729] Will try again in 5 seconds ...
	I0913 17:14:15.431730    5201 start.go:360] acquireMachinesLock for kubernetes-upgrade-171000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:14:15.432274    5201 start.go:364] duration metric: took 425.542µs to acquireMachinesLock for "kubernetes-upgrade-171000"
	I0913 17:14:15.432448    5201 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:14:15.432758    5201 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:14:15.438350    5201 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 17:14:15.491382    5201 start.go:159] libmachine.API.Create for "kubernetes-upgrade-171000" (driver="qemu2")
	I0913 17:14:15.491439    5201 client.go:168] LocalClient.Create starting
	I0913 17:14:15.491560    5201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:14:15.491657    5201 main.go:141] libmachine: Decoding PEM data...
	I0913 17:14:15.491674    5201 main.go:141] libmachine: Parsing certificate...
	I0913 17:14:15.491751    5201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:14:15.491798    5201 main.go:141] libmachine: Decoding PEM data...
	I0913 17:14:15.491822    5201 main.go:141] libmachine: Parsing certificate...
	I0913 17:14:15.492384    5201 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:14:15.662796    5201 main.go:141] libmachine: Creating SSH key...
	I0913 17:14:15.762531    5201 main.go:141] libmachine: Creating Disk image...
	I0913 17:14:15.762536    5201 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:14:15.762702    5201 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2
	I0913 17:14:15.772482    5201 main.go:141] libmachine: STDOUT: 
	I0913 17:14:15.772504    5201 main.go:141] libmachine: STDERR: 
	I0913 17:14:15.772580    5201 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2 +20000M
	I0913 17:14:15.780693    5201 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:14:15.780708    5201 main.go:141] libmachine: STDERR: 
	I0913 17:14:15.780721    5201 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2
	I0913 17:14:15.780726    5201 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:14:15.780737    5201 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:14:15.780782    5201 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:04:c5:5d:1b:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2
	I0913 17:14:15.782538    5201 main.go:141] libmachine: STDOUT: 
	I0913 17:14:15.782564    5201 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:14:15.782581    5201 client.go:171] duration metric: took 291.141209ms to LocalClient.Create
	I0913 17:14:17.784757    5201 start.go:128] duration metric: took 2.35199625s to createHost
	I0913 17:14:17.784850    5201 start.go:83] releasing machines lock for "kubernetes-upgrade-171000", held for 2.352587667s
	W0913 17:14:17.785240    5201 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-171000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-171000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:14:17.801784    5201 out.go:201] 
	W0913 17:14:17.805989    5201 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:14:17.806016    5201 out.go:270] * 
	* 
	W0913 17:14:17.808444    5201 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:14:17.821824    5201 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-171000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-171000
E0913 17:14:18.627089    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-171000: (3.696996334s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-171000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-171000 status --format={{.Host}}: exit status 7 (32.173875ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-171000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-171000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.18640625s)

-- stdout --
	* [kubernetes-upgrade-171000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-171000" primary control-plane node in "kubernetes-upgrade-171000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-171000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-171000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:14:21.594298    5236 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:14:21.594429    5236 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:14:21.594433    5236 out.go:358] Setting ErrFile to fd 2...
	I0913 17:14:21.594435    5236 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:14:21.594587    5236 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:14:21.595597    5236 out.go:352] Setting JSON to false
	I0913 17:14:21.612064    5236 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4425,"bootTime":1726268436,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:14:21.612132    5236 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:14:21.616808    5236 out.go:177] * [kubernetes-upgrade-171000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:14:21.623719    5236 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:14:21.623780    5236 notify.go:220] Checking for updates...
	I0913 17:14:21.633852    5236 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:14:21.637821    5236 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:14:21.640822    5236 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:14:21.643812    5236 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:14:21.646834    5236 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:14:21.650144    5236 config.go:182] Loaded profile config "kubernetes-upgrade-171000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0913 17:14:21.650386    5236 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:14:21.654824    5236 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 17:14:21.661800    5236 start.go:297] selected driver: qemu2
	I0913 17:14:21.661806    5236 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:14:21.661872    5236 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:14:21.664206    5236 cni.go:84] Creating CNI manager for ""
	I0913 17:14:21.664242    5236 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:14:21.664267    5236 start.go:340] cluster config:
	{Name:kubernetes-upgrade-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:14:21.667927    5236 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:14:21.674808    5236 out.go:177] * Starting "kubernetes-upgrade-171000" primary control-plane node in "kubernetes-upgrade-171000" cluster
	I0913 17:14:21.678761    5236 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:14:21.678779    5236 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:14:21.678788    5236 cache.go:56] Caching tarball of preloaded images
	I0913 17:14:21.678851    5236 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:14:21.678857    5236 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:14:21.678908    5236 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/kubernetes-upgrade-171000/config.json ...
	I0913 17:14:21.679447    5236 start.go:360] acquireMachinesLock for kubernetes-upgrade-171000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:14:21.679476    5236 start.go:364] duration metric: took 21.75µs to acquireMachinesLock for "kubernetes-upgrade-171000"
	I0913 17:14:21.679484    5236 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:14:21.679490    5236 fix.go:54] fixHost starting: 
	I0913 17:14:21.679616    5236 fix.go:112] recreateIfNeeded on kubernetes-upgrade-171000: state=Stopped err=<nil>
	W0913 17:14:21.679624    5236 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:14:21.682842    5236 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-171000" ...
	I0913 17:14:21.690615    5236 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:14:21.690650    5236 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:04:c5:5d:1b:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2
	I0913 17:14:21.692865    5236 main.go:141] libmachine: STDOUT: 
	I0913 17:14:21.692884    5236 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:14:21.692912    5236 fix.go:56] duration metric: took 13.420459ms for fixHost
	I0913 17:14:21.692916    5236 start.go:83] releasing machines lock for "kubernetes-upgrade-171000", held for 13.435875ms
	W0913 17:14:21.692921    5236 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:14:21.692956    5236 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:14:21.692960    5236 start.go:729] Will try again in 5 seconds ...
	I0913 17:14:26.695102    5236 start.go:360] acquireMachinesLock for kubernetes-upgrade-171000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:14:26.695652    5236 start.go:364] duration metric: took 441.958µs to acquireMachinesLock for "kubernetes-upgrade-171000"
	I0913 17:14:26.695833    5236 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:14:26.695857    5236 fix.go:54] fixHost starting: 
	I0913 17:14:26.696612    5236 fix.go:112] recreateIfNeeded on kubernetes-upgrade-171000: state=Stopped err=<nil>
	W0913 17:14:26.696638    5236 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:14:26.705207    5236 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-171000" ...
	I0913 17:14:26.709207    5236 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:14:26.709491    5236 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:04:c5:5d:1b:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubernetes-upgrade-171000/disk.qcow2
	I0913 17:14:26.719599    5236 main.go:141] libmachine: STDOUT: 
	I0913 17:14:26.719675    5236 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:14:26.719765    5236 fix.go:56] duration metric: took 23.910708ms for fixHost
	I0913 17:14:26.719782    5236 start.go:83] releasing machines lock for "kubernetes-upgrade-171000", held for 24.103667ms
	W0913 17:14:26.719963    5236 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-171000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-171000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:14:26.728131    5236 out.go:201] 
	W0913 17:14:26.731281    5236 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:14:26.731311    5236 out.go:270] * 
	* 
	W0913 17:14:26.733242    5236 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:14:26.739200    5236 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-171000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-171000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-171000 version --output=json: exit status 1 (68.534792ms)

** stderr ** 
	error: context "kubernetes-upgrade-171000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-13 17:14:26.82327 -0700 PDT m=+2934.329604043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-171000 -n kubernetes-upgrade-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-171000 -n kubernetes-upgrade-171000: exit status 7 (33.470458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-171000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-171000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-171000
--- FAIL: TestKubernetesUpgrade (19.00s)
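
Triage note: every "Creating qemu2 VM" and "Restarting existing qemu2 VM" attempt above dies on the same error, Failed to connect to "/var/run/socket_vmnet": Connection refused, which points at the socket_vmnet daemon on the CI host rather than at minikube or the upgrade logic itself. Below is a minimal sketch of a probe that reproduces the failing step in isolation, assuming only the socket path shown in the log; this program is illustrative and not part of the test suite:

	// probe.go: dial the unix socket the same way socket_vmnet_client does.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// The path comes from SocketVMnetPath in the cluster config above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here matches the log: nothing is listening.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails the same way, restarting the socket_vmnet service on the agent is worth trying before rerunning the test.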

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.23s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19640
- KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1611350491/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.23s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.71s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19640
- KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current895413472/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.71s)
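
Triage note: both TestHyperkitDriverSkipUpgrade subtests fail identically, and the output says why: the hyperkit driver only ships for Intel Macs, so minikube exits with DRV_UNSUPPORTED_OS (exit status 56) on this darwin/arm64 agent. A sketch of the kind of guard that would turn these into skips rather than failures follows; maybeSkipHyperkit is a hypothetical helper name, not a function in driver_install_or_update_test.go:

	// In a _test.go file; illustrative only.
	package main

	import (
		"runtime"
		"testing"
	)

	// maybeSkipHyperkit skips the calling test on platforms where the
	// hyperkit driver cannot run (it ships only for Intel macOS).
	func maybeSkipHyperkit(t *testing.T) {
		t.Helper()
		if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
			t.Skip("hyperkit driver is not supported on darwin/arm64")
		}
	}

	func TestHyperkitUpgradeExample(t *testing.T) {
		maybeSkipHyperkit(t)
		// ... the driver upgrade checks would run here on an Intel Mac ...
	}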

TestStoppedBinaryUpgrade/Upgrade (573.3s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.694118312 start -p stopped-upgrade-434000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.694118312 start -p stopped-upgrade-434000 --memory=2200 --vm-driver=qemu2 : (39.891214666s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.694118312 -p stopped-upgrade-434000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.694118312 -p stopped-upgrade-434000 stop: (12.122615625s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-434000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0913 17:17:21.719331    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
E0913 17:18:42.441359    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
E0913 17:19:18.622434    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-434000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.172961625s)

-- stdout --
	* [stopped-upgrade-434000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-434000" primary control-plane node in "stopped-upgrade-434000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-434000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0913 17:15:19.985900    5271 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:15:19.986089    5271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:15:19.986094    5271 out.go:358] Setting ErrFile to fd 2...
	I0913 17:15:19.986097    5271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:15:19.986274    5271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:15:19.987676    5271 out.go:352] Setting JSON to false
	I0913 17:15:20.007126    5271 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4483,"bootTime":1726268436,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:15:20.007208    5271 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:15:20.010630    5271 out.go:177] * [stopped-upgrade-434000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:15:20.017576    5271 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:15:20.017614    5271 notify.go:220] Checking for updates...
	I0913 17:15:20.024526    5271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:15:20.027513    5271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:15:20.031541    5271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:15:20.034530    5271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:15:20.037556    5271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:15:20.040800    5271 config.go:182] Loaded profile config "stopped-upgrade-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:15:20.043513    5271 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0913 17:15:20.046585    5271 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:15:20.050515    5271 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 17:15:20.057505    5271 start.go:297] selected driver: qemu2
	I0913 17:15:20.057510    5271 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-434000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 17:15:20.057557    5271 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:15:20.060154    5271 cni.go:84] Creating CNI manager for ""
	I0913 17:15:20.060187    5271 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:15:20.060212    5271 start.go:340] cluster config:
	{Name:stopped-upgrade-434000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 17:15:20.060261    5271 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:15:20.068497    5271 out.go:177] * Starting "stopped-upgrade-434000" primary control-plane node in "stopped-upgrade-434000" cluster
	I0913 17:15:20.072573    5271 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0913 17:15:20.072588    5271 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0913 17:15:20.072598    5271 cache.go:56] Caching tarball of preloaded images
	I0913 17:15:20.072659    5271 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:15:20.072665    5271 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0913 17:15:20.072716    5271 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/config.json ...
	I0913 17:15:20.073192    5271 start.go:360] acquireMachinesLock for stopped-upgrade-434000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:15:20.073225    5271 start.go:364] duration metric: took 26.917µs to acquireMachinesLock for "stopped-upgrade-434000"
	I0913 17:15:20.073232    5271 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:15:20.073238    5271 fix.go:54] fixHost starting: 
	I0913 17:15:20.073335    5271 fix.go:112] recreateIfNeeded on stopped-upgrade-434000: state=Stopped err=<nil>
	W0913 17:15:20.073342    5271 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:15:20.081548    5271 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-434000" ...
	I0913 17:15:20.085412    5271 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:15:20.085479    5271 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50468-:22,hostfwd=tcp::50469-:2376,hostname=stopped-upgrade-434000 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/disk.qcow2
	I0913 17:15:20.131839    5271 main.go:141] libmachine: STDOUT: 
	I0913 17:15:20.131865    5271 main.go:141] libmachine: STDERR: 
	I0913 17:15:20.131871    5271 main.go:141] libmachine: Waiting for VM to start (ssh -p 50468 docker@127.0.0.1)...
	I0913 17:15:39.616795    5271 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/config.json ...
	I0913 17:15:39.617180    5271 machine.go:93] provisionDockerMachine start ...
	I0913 17:15:39.617266    5271 main.go:141] libmachine: Using SSH client type: native
	I0913 17:15:39.617527    5271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010ad190] 0x1010af9d0 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0913 17:15:39.617538    5271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 17:15:39.690882    5271 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0913 17:15:39.690900    5271 buildroot.go:166] provisioning hostname "stopped-upgrade-434000"
	I0913 17:15:39.690980    5271 main.go:141] libmachine: Using SSH client type: native
	I0913 17:15:39.691141    5271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010ad190] 0x1010af9d0 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0913 17:15:39.691153    5271 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-434000 && echo "stopped-upgrade-434000" | sudo tee /etc/hostname
	I0913 17:15:39.763986    5271 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-434000
	
	I0913 17:15:39.764055    5271 main.go:141] libmachine: Using SSH client type: native
	I0913 17:15:39.764232    5271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010ad190] 0x1010af9d0 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0913 17:15:39.764247    5271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-434000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-434000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-434000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 17:15:39.832330    5271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
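
The script above is how provisioning keeps /etc/hosts idempotent: nothing is written if the hostname already appears, an existing 127.0.1.1 line is rewritten in place if present, and only otherwise is a new entry appended. A small Go sketch that renders the same script (hostsCmd is a hypothetical helper, not a minikube function):

    package sketch

    import "fmt"

    // hostsCmd returns the idempotent /etc/hosts edit sent over SSH above.
    func hostsCmd(name string) string {
        return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
    }
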
	I0913 17:15:39.832346    5271 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19640-1360/.minikube CaCertPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19640-1360/.minikube}
	I0913 17:15:39.832359    5271 buildroot.go:174] setting up certificates
	I0913 17:15:39.832369    5271 provision.go:84] configureAuth start
	I0913 17:15:39.832374    5271 provision.go:143] copyHostCerts
	I0913 17:15:39.832453    5271 exec_runner.go:144] found /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.pem, removing ...
	I0913 17:15:39.832462    5271 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.pem
	I0913 17:15:39.832559    5271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.pem (1078 bytes)
	I0913 17:15:39.832732    5271 exec_runner.go:144] found /Users/jenkins/minikube-integration/19640-1360/.minikube/cert.pem, removing ...
	I0913 17:15:39.832737    5271 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19640-1360/.minikube/cert.pem
	I0913 17:15:39.832782    5271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19640-1360/.minikube/cert.pem (1123 bytes)
	I0913 17:15:39.832882    5271 exec_runner.go:144] found /Users/jenkins/minikube-integration/19640-1360/.minikube/key.pem, removing ...
	I0913 17:15:39.832885    5271 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19640-1360/.minikube/key.pem
	I0913 17:15:39.832927    5271 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19640-1360/.minikube/key.pem (1679 bytes)
	I0913 17:15:39.833072    5271 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-434000 san=[127.0.0.1 localhost minikube stopped-upgrade-434000]
	I0913 17:15:39.895458    5271 provision.go:177] copyRemoteCerts
	I0913 17:15:39.895494    5271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 17:15:39.895503    5271 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/id_rsa Username:docker}
	I0913 17:15:39.931964    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0913 17:15:39.938977    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0913 17:15:39.945469    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0913 17:15:39.954609    5271 provision.go:87] duration metric: took 122.231917ms to configureAuth
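
configureAuth's copyHostCerts phase above shows a consistent found/rm/cp rhythm: each of ca.pem, cert.pem, and key.pem is deleted from the store root if a stale copy exists, then refreshed from the certs directory. A sketch of that refresh step under the same assumptions (purely local files; refreshCert is an illustrative helper, not minikube's exec_runner):

    package sketch

    import (
        "io"
        "os"
    )

    // refreshCert mirrors the exec_runner lines above: remove any stale copy
    // at dst ("found ..., removing ..."), then copy src into place.
    func refreshCert(src, dst string) error {
        if _, err := os.Stat(dst); err == nil {
            if err := os.Remove(dst); err != nil {
                return err
            }
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }
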
	I0913 17:15:39.954619    5271 buildroot.go:189] setting minikube options for container-runtime
	I0913 17:15:39.954737    5271 config.go:182] Loaded profile config "stopped-upgrade-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:15:39.954787    5271 main.go:141] libmachine: Using SSH client type: native
	I0913 17:15:39.954880    5271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010ad190] 0x1010af9d0 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0913 17:15:39.954885    5271 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0913 17:15:40.018308    5271 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0913 17:15:40.018320    5271 buildroot.go:70] root file system type: tmpfs
	I0913 17:15:40.018371    5271 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0913 17:15:40.018422    5271 main.go:141] libmachine: Using SSH client type: native
	I0913 17:15:40.018528    5271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010ad190] 0x1010af9d0 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0913 17:15:40.018561    5271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0913 17:15:40.085798    5271 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0913 17:15:40.085856    5271 main.go:141] libmachine: Using SSH client type: native
	I0913 17:15:40.085966    5271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010ad190] 0x1010af9d0 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0913 17:15:40.085975    5271 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0913 17:15:40.457857    5271 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0913 17:15:40.457873    5271 machine.go:96] duration metric: took 840.695959ms to provisionDockerMachine
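
The docker.service rewrite just above uses two idioms worth noting. Inside the unit, the empty ExecStart= line deliberately clears any inherited start command before setting a new one (the unit's own comments explain why). Outside it, the update is guarded by a diff-or-replace one-liner: the candidate unit is written to docker.service.new, and only when diff exits non-zero (the files differ, or, as here, the old unit does not exist at all, hence the "can't stat" message followed by "Created symlink") is it moved into place and the daemon re-enabled and restarted. The guard, extracted verbatim from the log into a Go constant for reference:

    package sketch

    // updateUnitCmd is the "replace only if changed" guard from the log.
    const updateUnitCmd = `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
        `{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
        `sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
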
	I0913 17:15:40.457895    5271 start.go:293] postStartSetup for "stopped-upgrade-434000" (driver="qemu2")
	I0913 17:15:40.457905    5271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 17:15:40.457964    5271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 17:15:40.457972    5271 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/id_rsa Username:docker}
	I0913 17:15:40.492724    5271 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 17:15:40.494190    5271 info.go:137] Remote host: Buildroot 2021.02.12
	I0913 17:15:40.494199    5271 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19640-1360/.minikube/addons for local assets ...
	I0913 17:15:40.494286    5271 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19640-1360/.minikube/files for local assets ...
	I0913 17:15:40.494415    5271 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19640-1360/.minikube/files/etc/ssl/certs/18822.pem -> 18822.pem in /etc/ssl/certs
	I0913 17:15:40.494559    5271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0913 17:15:40.497281    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/files/etc/ssl/certs/18822.pem --> /etc/ssl/certs/18822.pem (1708 bytes)
	I0913 17:15:40.504640    5271 start.go:296] duration metric: took 46.734334ms for postStartSetup
	I0913 17:15:40.504661    5271 fix.go:56] duration metric: took 20.431731167s for fixHost
	I0913 17:15:40.504723    5271 main.go:141] libmachine: Using SSH client type: native
	I0913 17:15:40.504847    5271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010ad190] 0x1010af9d0 <nil>  [] 0s} localhost 50468 <nil> <nil>}
	I0913 17:15:40.504853    5271 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0913 17:15:40.569450    5271 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726272940.665142712
	
	I0913 17:15:40.569461    5271 fix.go:216] guest clock: 1726272940.665142712
	I0913 17:15:40.569470    5271 fix.go:229] Guest: 2024-09-13 17:15:40.665142712 -0700 PDT Remote: 2024-09-13 17:15:40.504663 -0700 PDT m=+20.548086001 (delta=160.479712ms)
	I0913 17:15:40.569482    5271 fix.go:200] guest clock delta is within tolerance: 160.479712ms
	I0913 17:15:40.569485    5271 start.go:83] releasing machines lock for "stopped-upgrade-434000", held for 20.496563042s
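
fix.go's clock check runs `date +%s.%N` in the guest, parses the fractional epoch it prints, and compares it against the host clock; here the 160ms delta is within tolerance, so no resync is needed. A sketch of that comparison (the tolerance parameter is an assumption here, not minikube's constant):

    package sketch

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDeltaOK parses the guest's `date +%s.%N` output and reports whether
    // the absolute host/guest delta is within tolerance.
    func clockDeltaOK(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, false, fmt.Errorf("parse guest clock: %w", err)
        }
        // float64 cannot hold full nanosecond precision, which is fine for a
        // millisecond-scale tolerance check like the one in the log.
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance, nil
    }
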
	I0913 17:15:40.569568    5271 ssh_runner.go:195] Run: cat /version.json
	I0913 17:15:40.569577    5271 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/id_rsa Username:docker}
	I0913 17:15:40.569568    5271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 17:15:40.569607    5271 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/id_rsa Username:docker}
	W0913 17:15:40.570292    5271 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:50595->127.0.0.1:50468: write: broken pipe
	I0913 17:15:40.570308    5271 retry.go:31] will retry after 146.226762ms: ssh: handshake failed: write tcp 127.0.0.1:50595->127.0.0.1:50468: write: broken pipe
	W0913 17:15:40.601685    5271 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0913 17:15:40.601732    5271 ssh_runner.go:195] Run: systemctl --version
	I0913 17:15:40.603516    5271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0913 17:15:40.605109    5271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 17:15:40.605142    5271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0913 17:15:40.608331    5271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0913 17:15:40.612890    5271 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0913 17:15:40.612899    5271 start.go:495] detecting cgroup driver to use...
	I0913 17:15:40.612978    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 17:15:40.618886    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0913 17:15:40.622166    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0913 17:15:40.625112    5271 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0913 17:15:40.625141    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0913 17:15:40.628490    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 17:15:40.631646    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0913 17:15:40.634452    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 17:15:40.637191    5271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 17:15:40.640515    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0913 17:15:40.643755    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0913 17:15:40.646629    5271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0913 17:15:40.649508    5271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 17:15:40.652798    5271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 17:15:40.655883    5271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:15:40.725868    5271 ssh_runner.go:195] Run: sudo systemctl restart containerd
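
Cgroup-driver setup rewrites /etc/containerd/config.toml through a series of in-place sed edits (the sandbox image, SystemdCgroup = false, the runc v2 runtime, the CNI conf_dir), then reloads systemd and restarts containerd. A sketch of the loop, introducing a Runner interface as a stand-in for minikube's ssh_runner (an assumption reused by the later sketches in this report):

    package sketch

    // Runner stands in for minikube's ssh_runner: execute a command on the
    // guest and return its output, with a non-nil error on non-zero exit.
    type Runner interface {
        Run(cmd string) (string, error)
    }

    // configureContainerd applies in-place sed edits like those above, then
    // reloads systemd and restarts containerd so they take effect.
    func configureContainerd(r Runner, sedExprs []string) error {
        for _, e := range sedExprs {
            if _, err := r.Run(`sh -c "sudo sed -i ` + e + ` /etc/containerd/config.toml"`); err != nil {
                return err
            }
        }
        if _, err := r.Run("sudo systemctl daemon-reload"); err != nil {
            return err
        }
        _, err := r.Run("sudo systemctl restart containerd")
        return err
    }
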
	I0913 17:15:40.734632    5271 start.go:495] detecting cgroup driver to use...
	I0913 17:15:40.734712    5271 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0913 17:15:40.740689    5271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 17:15:40.746036    5271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0913 17:15:40.757120    5271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0913 17:15:40.797277    5271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 17:15:40.802100    5271 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0913 17:15:40.863152    5271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 17:15:40.868567    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 17:15:40.874430    5271 ssh_runner.go:195] Run: which cri-dockerd
	I0913 17:15:40.875584    5271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0913 17:15:40.878017    5271 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0913 17:15:40.882645    5271 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0913 17:15:40.965661    5271 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0913 17:15:41.045695    5271 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0913 17:15:41.045755    5271 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0913 17:15:41.050983    5271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:15:41.131973    5271 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 17:15:42.290854    5271 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.158880708s)
	I0913 17:15:42.290922    5271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0913 17:15:42.295470    5271 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0913 17:15:42.301380    5271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 17:15:42.306128    5271 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0913 17:15:42.384907    5271 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0913 17:15:42.469943    5271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:15:42.549101    5271 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0913 17:15:42.555551    5271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 17:15:42.559816    5271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:15:42.626376    5271 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0913 17:15:42.664839    5271 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0913 17:15:42.664923    5271 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0913 17:15:42.667499    5271 start.go:563] Will wait 60s for crictl version
	I0913 17:15:42.667553    5271 ssh_runner.go:195] Run: which crictl
	I0913 17:15:42.668929    5271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 17:15:42.683430    5271 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0913 17:15:42.683506    5271 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 17:15:42.702185    5271 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 17:15:42.723581    5271 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0913 17:15:42.723736    5271 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0913 17:15:42.725122    5271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 17:15:42.729124    5271 kubeadm.go:883] updating cluster {Name:stopped-upgrade-434000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0913 17:15:42.729179    5271 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0913 17:15:42.729229    5271 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 17:15:42.742879    5271 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 17:15:42.742889    5271 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0913 17:15:42.742937    5271 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 17:15:42.745877    5271 ssh_runner.go:195] Run: which lz4
	I0913 17:15:42.747187    5271 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0913 17:15:42.748375    5271 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0913 17:15:42.748385    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0913 17:15:43.642872    5271 docker.go:649] duration metric: took 895.743959ms to copy over tarball
	I0913 17:15:43.642935    5271 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0913 17:15:44.807380    5271 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.164436208s)
	I0913 17:15:44.807404    5271 ssh_runner.go:146] rm: /preloaded.tar.lz4
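
The preload path avoids redundant transfers with a cheap existence probe: `stat -c "%s %y"` on the guest fails for a missing file (the "cannot statx" stderr above), and that failure is the signal to scp the ~360 MB tarball and unpack it into /var with lz4 before deleting it. A sketch of that check, reusing the assumed Runner interface from the containerd sketch earlier:

    package sketch

    // ensurePreload only pays for the transfer when the remote file is absent,
    // then extracts it with the same tar invocation as the log.
    func ensurePreload(r Runner, remotePath string, transfer func() error) error {
        if _, err := r.Run(`stat -c "%s %y" ` + remotePath); err == nil {
            return nil // already present; skip scp and extraction
        }
        if err := transfer(); err != nil { // e.g. scp the cached preloaded.tar.lz4
            return err
        }
        _, err := r.Run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remotePath)
        return err
    }
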
	I0913 17:15:44.823160    5271 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0913 17:15:44.826039    5271 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0913 17:15:44.830824    5271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:15:44.909372    5271 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 17:15:47.459094    5271 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.549738958s)
	I0913 17:15:47.459202    5271 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 17:15:47.472393    5271 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 17:15:47.472413    5271 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0913 17:15:47.472418    5271 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0913 17:15:47.481594    5271 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:15:47.483840    5271 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 17:15:47.484686    5271 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:15:47.484738    5271 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 17:15:47.487208    5271 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 17:15:47.487561    5271 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 17:15:47.488776    5271 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0913 17:15:47.488843    5271 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 17:15:47.490080    5271 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 17:15:47.490334    5271 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 17:15:47.491275    5271 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0913 17:15:47.491636    5271 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0913 17:15:47.492404    5271 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 17:15:47.492458    5271 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 17:15:47.493206    5271 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0913 17:15:47.493837    5271 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 17:15:47.924731    5271 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 17:15:47.932159    5271 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0913 17:15:47.937188    5271 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0913 17:15:47.937216    5271 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 17:15:47.937283    5271 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0913 17:15:47.949377    5271 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0913 17:15:47.951227    5271 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0913 17:15:47.951247    5271 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0913 17:15:47.951290    5271 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0913 17:15:47.952889    5271 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0913 17:15:47.952996    5271 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0913 17:15:47.966741    5271 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0913 17:15:47.966765    5271 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0913 17:15:47.966831    5271 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0913 17:15:47.969294    5271 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0913 17:15:47.972792    5271 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0913 17:15:47.975110    5271 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0913 17:15:47.975121    5271 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0913 17:15:47.975177    5271 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0913 17:15:47.975207    5271 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0913 17:15:47.986621    5271 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0913 17:15:47.986629    5271 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0913 17:15:47.986645    5271 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0913 17:15:47.986703    5271 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0913 17:15:48.000785    5271 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0913 17:15:48.000803    5271 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0913 17:15:48.000870    5271 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W0913 17:15:48.001892    5271 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0913 17:15:48.001999    5271 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0913 17:15:48.007921    5271 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0913 17:15:48.007978    5271 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0913 17:15:48.016370    5271 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0913 17:15:48.016498    5271 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0913 17:15:48.017973    5271 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0913 17:15:48.017994    5271 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0913 17:15:48.018047    5271 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0913 17:15:48.019895    5271 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0913 17:15:48.019913    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0913 17:15:48.027139    5271 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0913 17:15:48.027152    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0913 17:15:48.031606    5271 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0913 17:15:48.031736    5271 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0913 17:15:48.056336    5271 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0913 17:15:48.056376    5271 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0913 17:15:48.056401    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0913 17:15:48.102152    5271 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0913 17:15:48.102181    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0913 17:15:48.142131    5271 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
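
For images the preload cannot supply (here pause:3.7 and coredns, whose cached copies needed an arch fix from amd64 to arm64), the loader scp's each tarball into /var/lib/minikube/images and pipes it into the Docker daemon. The pipe from the log, wrapped in the same assumed Runner (a paraphrase of the docker.go:304 step, not its real code):

    package sketch

    // loadImage streams a cached image tarball into `docker load` on the guest.
    func loadImage(r Runner, tarball string) error {
        _, err := r.Run(`/bin/bash -c "sudo cat ` + tarball + ` | docker load"`)
        return err
    }
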
	W0913 17:15:48.267548    5271 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0913 17:15:48.267670    5271 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:15:48.278781    5271 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0913 17:15:48.278806    5271 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:15:48.278873    5271 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:15:48.294029    5271 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0913 17:15:48.294166    5271 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0913 17:15:48.295493    5271 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0913 17:15:48.295507    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0913 17:15:48.328759    5271 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0913 17:15:48.328773    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0913 17:15:48.580421    5271 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0913 17:15:48.580466    5271 cache_images.go:92] duration metric: took 1.108058708s to LoadCachedImages
	W0913 17:15:48.580507    5271 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0913 17:15:48.580512    5271 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0913 17:15:48.580574    5271 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-434000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 17:15:48.580648    5271 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0913 17:15:48.593873    5271 cni.go:84] Creating CNI manager for ""
	I0913 17:15:48.593886    5271 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:15:48.593894    5271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 17:15:48.593910    5271 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-434000 NodeName:stopped-upgrade-434000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 17:15:48.593976    5271 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-434000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 17:15:48.594043    5271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0913 17:15:48.596760    5271 binaries.go:44] Found k8s binaries, skipping transfer
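
The kubeadm config dump above is generated from the cluster profile. A sketch of how such a document can be rendered with Go's text/template; the template below covers only the InitConfiguration stanza and is an illustration under stated assumptions, not minikube's actual template:

    package sketch

    import (
        "strings"
        "text/template"
    )

    var initConfigTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.Name}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`))

    // renderInitConfig fills in the per-profile fields; for the run above they
    // would be ("10.0.2.15", "stopped-upgrade-434000", 8443).
    func renderInitConfig(nodeIP, name string, port int) (string, error) {
        var b strings.Builder
        err := initConfigTmpl.Execute(&b, struct {
            NodeIP string
            Port   int
            Name   string
        }{nodeIP, port, name})
        return b.String(), err
    }
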
	I0913 17:15:48.596792    5271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 17:15:48.599599    5271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0913 17:15:48.604569    5271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 17:15:48.609483    5271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0913 17:15:48.614828    5271 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0913 17:15:48.615960    5271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 17:15:48.619817    5271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:15:48.696213    5271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 17:15:48.702680    5271 certs.go:68] Setting up /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000 for IP: 10.0.2.15
	I0913 17:15:48.702689    5271 certs.go:194] generating shared ca certs ...
	I0913 17:15:48.702698    5271 certs.go:226] acquiring lock for ca certs: {Name:mka1fd556c9b3f29c4a4f622bab1c9ab3ca42c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:15:48.702872    5271 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.key
	I0913 17:15:48.702927    5271 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/proxy-client-ca.key
	I0913 17:15:48.702934    5271 certs.go:256] generating profile certs ...
	I0913 17:15:48.703007    5271 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/client.key
	I0913 17:15:48.703025    5271 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.key.80b5d6c6
	I0913 17:15:48.703037    5271 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.crt.80b5d6c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0913 17:15:48.840023    5271 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.crt.80b5d6c6 ...
	I0913 17:15:48.840036    5271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.crt.80b5d6c6: {Name:mkb5c88ac1f7f13f2e6e0a96f7a3818c09276c86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:15:48.840350    5271 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.key.80b5d6c6 ...
	I0913 17:15:48.840356    5271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.key.80b5d6c6: {Name:mk4fc2536626eac333b238412708d9e9a1843fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:15:48.840485    5271 certs.go:381] copying /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.crt.80b5d6c6 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.crt
	I0913 17:15:48.840694    5271 certs.go:385] copying /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.key.80b5d6c6 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.key
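
The apiserver serving certificate generated above is keyed to its SAN list: the cluster service VIP (10.96.0.1), loopback, 10.0.0.1, and the node IP 10.0.2.15, all signed by the cached minikubeCA. A self-contained sketch of issuing such a cert with crypto/x509; field choices like the RSA key size and CommonName are assumptions for illustration, while the 26280h lifetime matches CertExpiration in the config dump:

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signAPIServerCert issues a serving cert whose IP SANs match the
    // crypto.go:68 line above, signed by the supplied CA material.
    func signAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"}, // assumed subject
            IPAddresses: []net.IP{ // the SAN list from the log line above
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(26280 * time.Hour),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }
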
	I0913 17:15:48.840863    5271 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/proxy-client.key
	I0913 17:15:48.840997    5271 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/1882.pem (1338 bytes)
	W0913 17:15:48.841025    5271 certs.go:480] ignoring /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/1882_empty.pem, impossibly tiny 0 bytes
	I0913 17:15:48.841043    5271 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 17:15:48.841077    5271 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem (1078 bytes)
	I0913 17:15:48.841098    5271 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem (1123 bytes)
	I0913 17:15:48.841119    5271 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/key.pem (1679 bytes)
	I0913 17:15:48.841165    5271 certs.go:484] found cert: /Users/jenkins/minikube-integration/19640-1360/.minikube/files/etc/ssl/certs/18822.pem (1708 bytes)
	I0913 17:15:48.841483    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 17:15:48.848693    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 17:15:48.856180    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 17:15:48.863631    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 17:15:48.871352    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0913 17:15:48.878629    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 17:15:48.885151    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 17:15:48.892508    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0913 17:15:48.899865    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 17:15:48.906840    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/1882.pem --> /usr/share/ca-certificates/1882.pem (1338 bytes)
	I0913 17:15:48.913696    5271 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19640-1360/.minikube/files/etc/ssl/certs/18822.pem --> /usr/share/ca-certificates/18822.pem (1708 bytes)
	I0913 17:15:48.920658    5271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 17:15:48.925932    5271 ssh_runner.go:195] Run: openssl version
	I0913 17:15:48.927786    5271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 17:15:48.930676    5271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 17:15:48.932130    5271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I0913 17:15:48.932153    5271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 17:15:48.934046    5271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 17:15:48.937199    5271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1882.pem && ln -fs /usr/share/ca-certificates/1882.pem /etc/ssl/certs/1882.pem"
	I0913 17:15:48.940501    5271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1882.pem
	I0913 17:15:48.941872    5271 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 13 23:41 /usr/share/ca-certificates/1882.pem
	I0913 17:15:48.941892    5271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1882.pem
	I0913 17:15:48.943628    5271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1882.pem /etc/ssl/certs/51391683.0"
	I0913 17:15:48.946572    5271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18822.pem && ln -fs /usr/share/ca-certificates/18822.pem /etc/ssl/certs/18822.pem"
	I0913 17:15:48.949378    5271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18822.pem
	I0913 17:15:48.950768    5271 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 13 23:41 /usr/share/ca-certificates/18822.pem
	I0913 17:15:48.950790    5271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18822.pem
	I0913 17:15:48.952651    5271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/18822.pem /etc/ssl/certs/3ec20f2e.0"
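
The three test -L / ln -fs commands above exist because OpenSSL looks up CA certificates by subject-hash filename: `openssl x509 -hash -noout` prints the hash (b5213941 for minikubeCA.pem), and a symlink named <hash>.0 in /etc/ssl/certs makes the PEM discoverable to anything that scans by hash. A sketch pairing the two commands (caHashLink is a hypothetical helper, not a minikube function):

    package sketch

    // caHashLink returns the hash computation and the idempotent symlink
    // creation for one CA bundle, matching the pattern in the log.
    func caHashLink(pemPath, hash string) [2]string {
        return [2]string{
            "openssl x509 -hash -noout -in " + pemPath, // prints e.g. b5213941
            `sudo /bin/bash -c "test -L /etc/ssl/certs/` + hash + `.0 || ln -fs ` + pemPath + ` /etc/ssl/certs/` + hash + `.0"`,
        }
    }

The -checkend 86400 probes that follow ask openssl whether each control-plane certificate will still be valid 24 hours from now, which is how the restart path decides whether certs need regeneration.
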
	I0913 17:15:48.955956    5271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 17:15:48.957565    5271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0913 17:15:48.959696    5271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0913 17:15:48.961627    5271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0913 17:15:48.963595    5271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0913 17:15:48.965567    5271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0913 17:15:48.967424    5271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0913 17:15:48.969275    5271 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-434000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50503 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0913 17:15:48.969358    5271 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 17:15:48.979400    5271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 17:15:48.982338    5271 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0913 17:15:48.982343    5271 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0913 17:15:48.982368    5271 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0913 17:15:48.985794    5271 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0913 17:15:48.986081    5271 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-434000" does not appear in /Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:15:48.986179    5271 kubeconfig.go:62] /Users/jenkins/minikube-integration/19640-1360/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-434000" cluster setting kubeconfig missing "stopped-upgrade-434000" context setting]
	I0913 17:15:48.986399    5271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/kubeconfig: {Name:mke2b016812cedc34ffbfc79dbc5c22d8c43c377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:15:48.986850    5271 kapi.go:59] client config for stopped-upgrade-434000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/client.key", CAFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102685800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 17:15:48.987175    5271 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0913 17:15:48.989908    5271 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-434000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0913 17:15:48.989916    5271 kubeadm.go:1160] stopping kube-system containers ...
	I0913 17:15:48.989964    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 17:15:49.000885    5271 docker.go:483] Stopping containers: [82408eec4148 5a0624279b19 bae4a9a1e6b5 b1a82bf46d1b c4642c4570af 4396aa229875 3e54a98c5ad8 6920b725f6d5]
	I0913 17:15:49.000960    5271 ssh_runner.go:195] Run: docker stop 82408eec4148 5a0624279b19 bae4a9a1e6b5 b1a82bf46d1b c4642c4570af 4396aa229875 3e54a98c5ad8 6920b725f6d5
	I0913 17:15:49.011910    5271 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0913 17:15:49.017833    5271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 17:15:49.020997    5271 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 17:15:49.021003    5271 kubeadm.go:157] found existing configuration files:
	
	I0913 17:15:49.021029    5271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf
	I0913 17:15:49.024152    5271 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 17:15:49.024183    5271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 17:15:49.027224    5271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf
	I0913 17:15:49.029551    5271 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 17:15:49.029571    5271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 17:15:49.032546    5271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf
	I0913 17:15:49.035606    5271 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 17:15:49.035629    5271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 17:15:49.038166    5271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf
	I0913 17:15:49.040782    5271 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 17:15:49.040802    5271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 17:15:49.043776    5271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 17:15:49.046559    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 17:15:49.068163    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 17:15:49.478357    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0913 17:15:49.599806    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0913 17:15:49.625758    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0913 17:15:49.648497    5271 api_server.go:52] waiting for apiserver process to appear ...
	I0913 17:15:49.648579    5271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 17:15:50.150663    5271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 17:15:50.650628    5271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 17:15:50.654753    5271 api_server.go:72] duration metric: took 1.006272167s to wait for apiserver process to appear ...
	I0913 17:15:50.654764    5271 api_server.go:88] waiting for apiserver healthz status ...
	I0913 17:15:50.654772    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:15:55.655415    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:15:55.655472    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:00.655926    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:00.655970    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:05.656400    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:05.656446    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:10.656692    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:10.656722    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:15.656949    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:15.656974    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:20.657285    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:20.657339    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:25.657855    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:25.658032    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:30.659064    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:30.659089    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:35.660007    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:35.660068    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:40.661225    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:40.661252    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:45.662695    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:45.662715    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:50.664279    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:50.664454    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:16:50.679449    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:16:50.679539    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:16:50.691529    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:16:50.691615    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:16:50.701973    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:16:50.702044    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:16:50.712877    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:16:50.712965    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:16:50.724306    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:16:50.724403    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:16:50.737511    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:16:50.737586    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:16:50.747957    5271 logs.go:276] 0 containers: []
	W0913 17:16:50.747971    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:16:50.748043    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:16:50.758914    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:16:50.758931    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:16:50.758936    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:16:50.773077    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:16:50.773087    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:16:50.784619    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:16:50.784630    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:16:50.799227    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:16:50.799237    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:16:50.811236    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:16:50.811245    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:16:50.828772    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:16:50.828783    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:16:50.842463    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:16:50.842476    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:16:50.853502    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:16:50.853516    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:16:50.865761    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:16:50.865773    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:16:50.942688    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:16:50.942708    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:16:50.956622    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:16:50.956632    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:16:50.983336    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:16:50.983347    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:16:50.998820    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:16:50.998834    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:16:51.009969    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:16:51.009983    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:16:51.050166    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:16:51.050177    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:16:51.054375    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:16:51.054383    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:16:53.580287    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:16:58.582442    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:16:58.582674    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:16:58.599573    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:16:58.599683    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:16:58.612114    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:16:58.612206    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:16:58.622926    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:16:58.623012    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:16:58.634026    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:16:58.634140    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:16:58.644877    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:16:58.644955    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:16:58.655561    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:16:58.655645    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:16:58.666229    5271 logs.go:276] 0 containers: []
	W0913 17:16:58.666242    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:16:58.666313    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:16:58.680873    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:16:58.680887    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:16:58.680893    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:16:58.704692    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:16:58.704704    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:16:58.718570    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:16:58.718580    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:16:58.732283    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:16:58.732297    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:16:58.748365    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:16:58.748375    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:16:58.773177    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:16:58.773186    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:16:58.811978    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:16:58.811985    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:16:58.825684    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:16:58.825696    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:16:58.829842    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:16:58.829852    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:16:58.865749    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:16:58.865761    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:16:58.880101    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:16:58.880114    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:16:58.892626    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:16:58.892642    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:16:58.906386    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:16:58.906399    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:16:58.918036    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:16:58.918051    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:16:58.932649    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:16:58.932660    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:16:58.949778    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:16:58.949789    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:01.463116    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:06.465450    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:06.465728    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:06.489839    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:17:06.489978    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:06.506398    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:17:06.506507    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:06.519091    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:17:06.519179    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:06.530774    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:17:06.530857    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:06.541686    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:17:06.541759    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:06.552671    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:17:06.552753    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:06.563452    5271 logs.go:276] 0 containers: []
	W0913 17:17:06.563463    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:06.563535    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:06.574160    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:17:06.574180    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:06.574186    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:06.613327    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:17:06.613337    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:17:06.626944    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:17:06.626959    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:17:06.644505    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:17:06.644516    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:17:06.655946    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:17:06.655956    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:17:06.668613    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:17:06.668623    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:17:06.679965    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:17:06.679975    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:17:06.694458    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:17:06.694470    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:17:06.706423    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:17:06.706433    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:17:06.720774    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:06.720785    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:06.748172    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:17:06.748187    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:06.760009    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:17:06.760021    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:17:06.785755    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:17:06.785767    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:17:06.797146    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:06.797158    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:06.801871    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:06.801878    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:06.839020    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:17:06.839031    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:17:09.359167    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:14.361381    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:14.361496    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:14.372821    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:17:14.372902    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:14.383845    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:17:14.383928    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:14.398675    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:17:14.398747    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:14.408955    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:17:14.409024    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:14.419215    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:17:14.419296    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:14.429958    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:17:14.430037    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:14.440407    5271 logs.go:276] 0 containers: []
	W0913 17:17:14.440419    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:14.440495    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:14.451323    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:17:14.451341    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:17:14.451347    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:17:14.465571    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:17:14.465585    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:17:14.498160    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:17:14.498174    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:17:14.513256    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:17:14.513267    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:17:14.526021    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:14.526035    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:14.550619    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:17:14.550628    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:14.562295    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:14.562305    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:14.566816    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:14.566824    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:14.601248    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:17:14.601259    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:17:14.612609    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:17:14.612620    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:17:14.624046    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:17:14.624061    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:17:14.642122    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:17:14.642132    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:17:14.653574    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:14.653586    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:14.691504    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:17:14.691516    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:17:14.705167    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:17:14.705180    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:17:14.719991    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:17:14.720002    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:17:17.233668    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:22.235981    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:22.236457    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:22.269753    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:17:22.269909    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:22.289667    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:17:22.289786    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:22.304591    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:17:22.304690    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:22.316717    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:17:22.316800    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:22.326890    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:17:22.326975    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:22.339241    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:17:22.339325    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:22.349461    5271 logs.go:276] 0 containers: []
	W0913 17:17:22.349473    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:22.349539    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:22.360060    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:17:22.360079    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:17:22.360085    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:17:22.375008    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:17:22.375018    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:17:22.395016    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:17:22.395026    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:17:22.420500    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:17:22.420511    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:17:22.432395    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:17:22.432405    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:22.444087    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:22.444100    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:22.478801    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:17:22.478813    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:17:22.492081    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:17:22.492093    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:17:22.504079    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:17:22.504092    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:17:22.522516    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:17:22.522529    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:17:22.540258    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:22.540272    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:22.580054    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:22.580070    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:22.585111    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:17:22.585120    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:17:22.600366    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:17:22.600380    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:17:22.611745    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:17:22.611757    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:17:22.626232    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:22.626243    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:25.152069    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:30.154293    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:30.154472    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:30.167830    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:17:30.167926    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:30.179663    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:17:30.179749    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:30.189960    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:17:30.190045    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:30.200167    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:17:30.200253    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:30.215884    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:17:30.215973    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:30.226389    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:17:30.226474    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:30.236629    5271 logs.go:276] 0 containers: []
	W0913 17:17:30.236642    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:30.236711    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:30.247427    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:17:30.247446    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:17:30.247452    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:17:30.261308    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:17:30.261322    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:17:30.272503    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:17:30.272515    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:17:30.289620    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:17:30.289632    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:30.304337    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:30.304350    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:30.340623    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:17:30.340635    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:17:30.354640    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:17:30.354653    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:17:30.384130    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:17:30.384141    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:17:30.398000    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:17:30.398011    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:17:30.412569    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:17:30.412579    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:17:30.424222    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:17:30.424232    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:17:30.436412    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:17:30.436428    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:17:30.447845    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:30.447859    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:30.470666    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:30.470674    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:30.507367    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:30.507378    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:30.511335    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:17:30.511341    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:17:33.025702    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:38.026331    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:38.026596    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:38.048389    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:17:38.048505    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:38.063960    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:17:38.064063    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:38.076706    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:17:38.076794    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:38.088622    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:17:38.088704    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:38.103848    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:17:38.103918    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:38.114100    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:17:38.114190    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:38.124977    5271 logs.go:276] 0 containers: []
	W0913 17:17:38.124991    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:38.125051    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:38.135403    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:17:38.135424    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:17:38.135429    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:17:38.149937    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:17:38.149948    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:17:38.167939    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:17:38.167950    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:17:38.180022    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:38.180033    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:38.218284    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:17:38.218292    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:17:38.229603    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:17:38.229615    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:17:38.241397    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:17:38.241409    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:17:38.255323    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:17:38.255335    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:17:38.282924    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:17:38.282939    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:17:38.295448    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:38.295459    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:38.318299    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:38.318307    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:38.322572    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:38.322579    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:38.356861    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:17:38.356872    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:17:38.370882    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:17:38.370893    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:17:38.385355    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:17:38.385371    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:17:38.397155    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:17:38.397164    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:40.918649    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:45.920861    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:45.921101    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:45.936893    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:17:45.936986    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:45.951979    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:17:45.952065    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:45.963157    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:17:45.963244    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:45.973835    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:17:45.973922    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:45.984946    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:17:45.985029    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:45.995701    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:17:45.995782    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:46.005964    5271 logs.go:276] 0 containers: []
	W0913 17:17:46.005977    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:46.006049    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:46.016705    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:17:46.016721    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:17:46.016726    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:17:46.028615    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:17:46.028627    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:46.041173    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:17:46.041188    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:17:46.067346    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:17:46.067359    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:17:46.082207    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:17:46.082222    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:17:46.097216    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:17:46.097232    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:17:46.112077    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:17:46.112089    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:17:46.131283    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:17:46.131293    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:17:46.145588    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:17:46.145598    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:17:46.157398    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:17:46.157411    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:17:46.169639    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:17:46.169649    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:17:46.184144    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:17:46.184154    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:17:46.201380    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:46.201392    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:46.238818    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:46.238832    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:46.275768    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:46.275783    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:46.279754    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:46.279762    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:48.804494    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:17:53.806723    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:17:53.806913    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:17:53.824065    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:17:53.824154    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:17:53.835916    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:17:53.835996    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:17:53.846711    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:17:53.846789    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:17:53.857324    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:17:53.857404    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:17:53.868239    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:17:53.868327    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:17:53.879461    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:17:53.879540    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:17:53.890561    5271 logs.go:276] 0 containers: []
	W0913 17:17:53.890572    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:17:53.890638    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:17:53.901810    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:17:53.901825    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:17:53.901831    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:17:53.940121    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:17:53.940132    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:17:53.982453    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:17:53.982464    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:17:53.996833    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:17:53.996844    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:17:54.011887    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:17:54.011898    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:17:54.025634    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:17:54.025647    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:17:54.040575    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:17:54.040588    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:17:54.052118    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:17:54.052131    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:17:54.064253    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:17:54.064264    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:17:54.078170    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:17:54.078180    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:17:54.095758    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:17:54.095773    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:17:54.108095    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:17:54.108106    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:17:54.112353    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:17:54.112362    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:17:54.126144    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:17:54.126157    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:17:54.150615    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:17:54.150628    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:17:54.181653    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:17:54.181665    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
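
The block above is one full iteration of minikube's readiness loop: it probes the apiserver's /healthz endpoint with a short client timeout, and after each failure it re-enumerates the control-plane containers (docker ps -a --filter=name=k8s_<component>) and tails their logs before probing again. Below is a minimal Go sketch of that probe-and-diagnose pattern; it illustrates the behavior visible in this log, not minikube's actual code, and gatherDiagnostics is a hypothetical stand-in for the docker logs / journalctl gathering pass.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollHealthz probes url until it returns 200 OK or attempts run out.
	// After every failed probe it runs the supplied diagnostics pass,
	// mirroring the "Gathering logs for ..." blocks in this log.
	func pollHealthz(url string, attempts int, gatherDiagnostics func()) error {
		client := &http.Client{
			// Matches the ~5s "Client.Timeout exceeded" errors seen above.
			Timeout: 5 * time.Second,
			// The node's apiserver certificate is self-signed, so a raw
			// probe like this one has to skip verification.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for i := 0; i < attempts; i++ {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver answered; cluster is coming up
				}
			}
			gatherDiagnostics()                 // docker ps / docker logs / journalctl pass
			time.Sleep(2500 * time.Millisecond) // back off before the next probe
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		err := pollHealthz("https://10.0.2.15:8443/healthz", 3, func() {
			fmt.Println("gathering logs for kube-apiserver, etcd, coredns, ...")
		})
		fmt.Println(err)
	}

In this run the loop never succeeds: every probe from 17:17 onward times out, so the same diagnostics pass repeats for as long as the outer wait allows.
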
	I0913 17:17:56.696416    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:01.698633    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:01.698803    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:01.712823    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:18:01.712903    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:01.723733    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:18:01.723809    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:01.734349    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:18:01.734434    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:01.751685    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:18:01.751758    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:01.761967    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:18:01.762034    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:01.772807    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:18:01.772887    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:01.782704    5271 logs.go:276] 0 containers: []
	W0913 17:18:01.782715    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:01.782782    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:01.793086    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:18:01.793107    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:01.793112    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:01.833654    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:18:01.833664    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:18:01.848494    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:18:01.848504    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:18:01.870297    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:01.870307    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:01.895181    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:18:01.895191    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:18:01.909766    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:18:01.909778    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:18:01.934641    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:18:01.934651    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:18:01.946112    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:18:01.946125    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:01.958550    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:01.958560    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:01.995244    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:01.995254    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:01.999398    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:18:01.999412    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:18:02.013411    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:18:02.013422    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:18:02.028750    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:18:02.028760    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:18:02.043627    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:18:02.043638    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:18:02.057909    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:18:02.057920    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:18:02.069625    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:18:02.069637    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:18:04.590903    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:09.593053    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:09.593138    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:09.604840    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:18:09.604924    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:09.624285    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:18:09.624371    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:09.635227    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:18:09.635310    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:09.646039    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:18:09.646128    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:09.660166    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:18:09.660251    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:09.670868    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:18:09.670954    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:09.681751    5271 logs.go:276] 0 containers: []
	W0913 17:18:09.681763    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:09.681832    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:09.692127    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:18:09.692146    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:09.692153    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:09.729598    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:09.729606    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:09.765937    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:18:09.765951    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:18:09.779808    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:18:09.779822    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:18:09.805426    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:18:09.805444    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:18:09.820081    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:18:09.820096    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:18:09.831559    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:09.831573    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:09.854688    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:18:09.854697    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:09.866316    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:18:09.866330    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:18:09.882288    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:18:09.882299    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:18:09.896117    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:18:09.896130    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:18:09.907818    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:18:09.907831    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:18:09.919325    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:09.919339    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:09.923445    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:18:09.923451    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:18:09.939761    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:18:09.939773    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:18:09.959788    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:18:09.959801    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:18:12.475024    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:17.477311    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:17.477523    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:17.498819    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:18:17.498932    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:17.513639    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:18:17.513737    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:17.526193    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:18:17.526311    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:17.537566    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:18:17.537652    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:17.548249    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:18:17.548332    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:17.559482    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:18:17.559566    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:17.570323    5271 logs.go:276] 0 containers: []
	W0913 17:18:17.570337    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:17.570404    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:17.581676    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:18:17.581694    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:17.581699    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:17.618243    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:18:17.618255    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:18:17.633331    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:18:17.633343    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:18:17.644744    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:18:17.644758    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:18:17.656267    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:17.656278    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:17.679364    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:18:17.679374    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:18:17.705095    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:18:17.705110    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:18:17.717027    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:18:17.717039    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:18:17.734491    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:17.734502    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:17.770638    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:18:17.770649    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:18:17.785286    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:18:17.785297    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:18:17.799638    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:17.799653    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:17.803926    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:18:17.803933    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:18:17.820961    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:18:17.820972    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:18:17.840450    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:18:17.840463    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:18:17.853511    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:18:17.853525    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:20.367819    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:25.369967    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:25.370139    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:25.387789    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:18:25.387891    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:25.400148    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:18:25.400232    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:25.412138    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:18:25.412219    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:25.422254    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:18:25.422336    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:25.433389    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:18:25.433476    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:25.444883    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:18:25.444970    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:25.455060    5271 logs.go:276] 0 containers: []
	W0913 17:18:25.455075    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:25.455149    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:25.465258    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:18:25.465276    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:18:25.465283    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:18:25.477152    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:18:25.477164    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:18:25.489927    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:18:25.489938    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:18:25.504640    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:18:25.504653    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:18:25.517760    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:18:25.517772    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:25.531565    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:25.531578    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:25.569097    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:18:25.569110    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:18:25.583117    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:18:25.583129    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:18:25.594234    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:25.594245    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:25.618541    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:25.618551    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:25.623135    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:25.623144    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:25.657934    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:18:25.657949    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:18:25.683531    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:18:25.683550    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:18:25.697445    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:18:25.697456    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:18:25.712348    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:18:25.712364    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:18:25.723654    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:18:25.723664    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:18:28.241601    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:33.243883    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:33.244004    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:33.262643    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:18:33.262728    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:33.273280    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:18:33.273354    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:33.283466    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:18:33.283554    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:33.294358    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:18:33.294443    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:33.304681    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:18:33.304766    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:33.315797    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:18:33.315882    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:33.326053    5271 logs.go:276] 0 containers: []
	W0913 17:18:33.326068    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:33.326146    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:33.336374    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:18:33.336392    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:18:33.336405    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:18:33.351666    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:18:33.351675    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:18:33.365866    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:18:33.365879    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:18:33.380027    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:33.380039    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:33.417431    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:18:33.417446    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:18:33.429762    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:18:33.429773    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:18:33.443015    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:18:33.443026    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:33.456297    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:18:33.456312    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:18:33.467927    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:18:33.467938    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:18:33.493076    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:18:33.493086    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:18:33.509837    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:33.509849    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:33.534348    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:33.534357    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:33.539039    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:18:33.539046    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:18:33.553715    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:18:33.553727    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:18:33.565376    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:18:33.565392    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:18:33.576328    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:33.576338    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:36.116964    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:41.119247    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:41.119475    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:41.137006    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:18:41.137114    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:41.149758    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:18:41.149841    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:41.160830    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:18:41.160909    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:41.181502    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:18:41.181591    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:41.193255    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:18:41.193338    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:41.204020    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:18:41.204104    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:41.214659    5271 logs.go:276] 0 containers: []
	W0913 17:18:41.214673    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:41.214741    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:41.225251    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:18:41.225268    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:41.225274    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:41.229341    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:18:41.229347    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:18:41.240635    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:18:41.240646    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:18:41.253333    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:41.253344    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:41.291541    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:18:41.291551    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:18:41.306173    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:18:41.306184    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:18:41.320561    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:18:41.320572    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:18:41.334806    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:18:41.334820    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:18:41.352576    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:18:41.352591    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:18:41.364768    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:41.364780    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:41.388871    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:18:41.388880    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:41.400451    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:18:41.400465    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:18:41.426374    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:18:41.426388    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:18:41.438700    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:18:41.438712    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:18:41.452528    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:18:41.452543    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:18:41.468944    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:41.468955    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:44.005600    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:49.007854    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:49.008021    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:49.020395    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:18:49.020481    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:49.032707    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:18:49.032795    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:49.047598    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:18:49.047672    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:49.061060    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:18:49.061138    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:49.071234    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:18:49.071315    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:49.082132    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:18:49.082219    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:49.093532    5271 logs.go:276] 0 containers: []
	W0913 17:18:49.093543    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:49.093615    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:49.104062    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:18:49.104104    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:49.104109    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:49.143898    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:18:49.143910    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:18:49.159341    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:18:49.159351    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:18:49.177934    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:18:49.177948    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:18:49.190175    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:18:49.190190    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:18:49.206751    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:18:49.206761    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:18:49.225788    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:18:49.225802    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:18:49.237251    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:18:49.237262    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:18:49.262354    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:18:49.262364    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:18:49.280111    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:18:49.280122    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:18:49.291879    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:49.291891    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:49.316105    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:18:49.316113    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:49.327306    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:49.327319    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:49.364121    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:49.364132    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:49.368103    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:18:49.368109    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:18:49.382737    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:18:49.382752    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:18:51.899244    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:18:56.901373    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:18:56.901560    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:18:56.917131    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:18:56.917238    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:18:56.929925    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:18:56.930014    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:18:56.944790    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:18:56.944869    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:18:56.955251    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:18:56.955335    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:18:56.965266    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:18:56.965341    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:18:56.975949    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:18:56.976018    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:18:56.985596    5271 logs.go:276] 0 containers: []
	W0913 17:18:56.985608    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:18:56.985679    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:18:56.996119    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:18:56.996135    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:18:56.996140    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:18:57.014035    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:18:57.014046    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:18:57.026113    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:18:57.026125    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:18:57.030693    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:18:57.030702    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:18:57.044929    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:18:57.044939    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:18:57.056318    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:18:57.056327    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:18:57.074698    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:18:57.074713    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:18:57.098989    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:18:57.099002    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:18:57.110692    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:18:57.110704    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:18:57.124831    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:18:57.124843    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:18:57.149258    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:18:57.149270    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:18:57.166101    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:18:57.166111    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:18:57.202900    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:18:57.202907    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:18:57.237286    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:18:57.237296    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:18:57.252151    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:18:57.252163    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:18:57.266771    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:18:57.266784    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:18:59.780612    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:04.783008    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:04.783436    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:04.812206    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:19:04.812342    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:04.829998    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:19:04.830111    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:04.843735    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:19:04.843832    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:04.855449    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:19:04.855523    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:04.866773    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:19:04.866858    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:04.877931    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:19:04.878009    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:04.892794    5271 logs.go:276] 0 containers: []
	W0913 17:19:04.892807    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:04.892875    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:04.903076    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:19:04.903093    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:19:04.903099    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:19:04.917485    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:19:04.917494    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:19:04.930765    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:19:04.930779    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:04.942842    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:04.942854    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:04.983034    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:19:04.983046    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:19:05.004028    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:19:05.004040    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:19:05.019040    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:19:05.019050    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:19:05.031107    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:19:05.031119    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:19:05.042816    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:05.042829    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:05.080643    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:19:05.080654    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:19:05.094508    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:19:05.094524    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:19:05.119754    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:19:05.119765    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:19:05.131562    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:19:05.131575    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:19:05.149176    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:05.149186    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:05.172857    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:05.172868    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:05.176852    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:19:05.176862    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:19:07.696970    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:12.699273    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:12.699516    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:12.722004    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:19:12.722120    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:12.737489    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:19:12.737580    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:12.749605    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:19:12.749697    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:12.761101    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:19:12.761195    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:12.771420    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:19:12.771509    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:12.785841    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:19:12.785927    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:12.795932    5271 logs.go:276] 0 containers: []
	W0913 17:19:12.795946    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:12.796016    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:12.806893    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:19:12.806914    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:19:12.806921    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:19:12.818741    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:12.818754    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:12.842623    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:19:12.842633    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:12.854569    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:12.854580    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:12.894215    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:12.894254    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:12.929086    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:12.929097    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:12.933397    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:19:12.933404    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:19:12.958677    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:19:12.958688    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:19:12.973676    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:19:12.973687    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:19:12.987418    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:19:12.987435    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:19:13.001034    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:19:13.001046    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:19:13.012852    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:19:13.012863    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:19:13.029719    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:19:13.029730    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:19:13.044953    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:19:13.044964    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:19:13.070824    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:19:13.070839    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:19:13.106166    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:19:13.106180    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:19:15.621267    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:20.623499    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
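This Checking/stopped pair repeats for the rest of the section: the apiserver at 10.0.2.15:8443 never answers /healthz, so each probe dies on the client timeout roughly five seconds later. A minimal Go sketch of such a poll loop follows; the function name, attempt count, and 5s timeout are illustrative assumptions, not minikube's actual api_server.go code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint, logging each
// attempt the way the api_server.go lines above do. It gives up after
// a fixed number of attempts instead of minikube's deadline handling.
func waitForHealthz(url string, attempts int) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped"
		Transport: &http.Transport{
			// the probe targets a self-signed apiserver cert; sketch only
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < attempts; i++ {
		fmt.Printf("Checking apiserver healthz at %s ...\n", url)
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err) // e.g. Client.Timeout exceeded
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
	}
	return fmt.Errorf("apiserver never reported healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 3); err != nil {
		fmt.Println(err)
	}
}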
	I0913 17:19:20.623611    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:20.635235    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:19:20.635327    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:20.645850    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:19:20.645935    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:20.657077    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:19:20.657151    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:20.671564    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:19:20.671651    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:20.682731    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:19:20.682804    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:20.693460    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:19:20.693533    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:20.707554    5271 logs.go:276] 0 containers: []
	W0913 17:19:20.707565    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:20.707628    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:20.717555    5271 logs.go:276] 1 containers: [5b6e0dea8170]
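Each diagnostic pass starts the same way: one docker ps -a per control-plane component, filtered on the kubelet's k8s_<component> container-name prefix. A sketch of that discovery step, under the assumption that shelling out to the docker CLI (as the log does) is acceptable:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited)
// whose name carries the k8s_<component> prefix, mirroring the
// `docker ps -a --filter=name=... --format={{.ID}}` calls above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // same shape as the logs.go:276 lines
	}
}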
	I0913 17:19:20.717572    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:19:20.717579    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:19:20.733002    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:19:20.733011    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:20.744677    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:19:20.744689    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:19:20.756749    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:19:20.756761    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:19:20.781283    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:19:20.781295    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:19:20.795689    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:19:20.795703    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:19:20.810808    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:19:20.810821    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:19:20.828722    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:20.828732    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:20.863679    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:19:20.863692    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:19:20.877824    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:19:20.877837    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:19:20.893106    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:20.893120    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:20.915887    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:20.915896    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:20.920001    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:19:20.920006    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:19:20.933881    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:19:20.933895    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:19:20.946756    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:19:20.946767    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:19:20.958319    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:20.958332    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:23.497429    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:28.499608    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:28.499800    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:28.512969    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:19:28.513062    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:28.523741    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:19:28.523823    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:28.534993    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:19:28.535071    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:28.545513    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:19:28.545598    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:28.556225    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:19:28.556305    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:28.566487    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:19:28.566576    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:28.577205    5271 logs.go:276] 0 containers: []
	W0913 17:19:28.577217    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:28.577289    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:28.587409    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:19:28.587426    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:28.587431    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:28.611322    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:19:28.611336    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:28.623531    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:19:28.623544    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:19:28.649439    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:19:28.649450    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:19:28.663601    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:19:28.663610    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:19:28.674940    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:19:28.674952    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:19:28.687126    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:28.687137    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:28.725698    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:28.725707    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:28.760273    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:19:28.760288    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:19:28.778796    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:19:28.778806    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:19:28.791517    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:28.791527    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:28.795728    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:19:28.795734    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:19:28.822791    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:19:28.822802    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:19:28.844764    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:19:28.844775    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:19:28.859889    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:19:28.859902    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:19:28.871840    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:19:28.871853    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
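After discovery, every pass tails the last 400 lines from each container, plus the kubelet and docker journals. A condensed sketch of the per-container half of that loop; componentIDs is an assumed map built from the discovery step shown above:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
)

// gatherContainerLogs tails the last `tail` lines of each container,
// as the `docker logs --tail 400 <id>` commands above do. Errors are
// silently skipped here; the real logs.go records them per entry.
func gatherContainerLogs(componentIDs map[string][]string, tail int) map[string]string {
	out := make(map[string]string)
	for comp, ids := range componentIDs {
		for _, id := range ids {
			b, err := exec.Command("docker", "logs",
				"--tail", strconv.Itoa(tail), id).CombinedOutput()
			if err != nil {
				continue
			}
			out[comp+"/"+id] = string(b)
		}
	}
	return out
}

func main() {
	logs := gatherContainerLogs(map[string][]string{
		"etcd": {"9ba22d798507", "b1a82bf46d1b"},
	}, 400)
	fmt.Println(len(logs), "log streams gathered")
}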
	I0913 17:19:31.383419    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:36.385717    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:36.385961    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:36.409818    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:19:36.409954    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:36.426310    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:19:36.426399    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:36.439455    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:19:36.439533    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:36.450847    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:19:36.450928    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:36.461356    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:19:36.461437    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:36.471697    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:19:36.471786    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:36.482354    5271 logs.go:276] 0 containers: []
	W0913 17:19:36.482368    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:36.482437    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:36.493197    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:19:36.493212    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:19:36.493217    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:19:36.504265    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:19:36.504278    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:19:36.522820    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:36.522829    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:36.546498    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:36.546511    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:36.550823    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:36.550833    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:36.585747    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:19:36.585761    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:36.597470    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:36.597481    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:36.634468    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:19:36.634482    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:19:36.648952    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:19:36.648964    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:19:36.661630    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:19:36.661641    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:19:36.677198    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:19:36.677211    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:19:36.689254    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:19:36.689269    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:19:36.706191    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:19:36.706200    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:19:36.721307    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:19:36.721318    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:19:36.735082    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:19:36.735096    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:19:36.749856    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:19:36.749870    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:19:39.276660    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:44.278901    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:44.279139    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:19:44.301754    5271 logs.go:276] 2 containers: [b2e8459e4cd9 c4642c4570af]
	I0913 17:19:44.301879    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:19:44.317480    5271 logs.go:276] 2 containers: [9ba22d798507 b1a82bf46d1b]
	I0913 17:19:44.317578    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:19:44.329690    5271 logs.go:276] 1 containers: [b8aaea0adda8]
	I0913 17:19:44.329772    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:19:44.341615    5271 logs.go:276] 2 containers: [2869f98b5fca bae4a9a1e6b5]
	I0913 17:19:44.341700    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:19:44.352416    5271 logs.go:276] 1 containers: [3a8d576da8cd]
	I0913 17:19:44.352495    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:19:44.363893    5271 logs.go:276] 2 containers: [b9f8bb22fd83 82408eec4148]
	I0913 17:19:44.363974    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:19:44.373922    5271 logs.go:276] 0 containers: []
	W0913 17:19:44.373933    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:19:44.373999    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:19:44.384649    5271 logs.go:276] 1 containers: [5b6e0dea8170]
	I0913 17:19:44.384667    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:19:44.384673    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:19:44.388909    5271 logs.go:123] Gathering logs for coredns [b8aaea0adda8] ...
	I0913 17:19:44.388919    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b8aaea0adda8"
	I0913 17:19:44.400118    5271 logs.go:123] Gathering logs for kube-scheduler [2869f98b5fca] ...
	I0913 17:19:44.400130    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2869f98b5fca"
	I0913 17:19:44.412035    5271 logs.go:123] Gathering logs for kube-scheduler [bae4a9a1e6b5] ...
	I0913 17:19:44.412050    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bae4a9a1e6b5"
	I0913 17:19:44.428640    5271 logs.go:123] Gathering logs for kube-proxy [3a8d576da8cd] ...
	I0913 17:19:44.428650    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a8d576da8cd"
	I0913 17:19:44.440495    5271 logs.go:123] Gathering logs for kube-controller-manager [82408eec4148] ...
	I0913 17:19:44.440506    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82408eec4148"
	I0913 17:19:44.454308    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:19:44.454319    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:19:44.477697    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:19:44.477705    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:19:44.491822    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:19:44.491832    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:19:44.530935    5271 logs.go:123] Gathering logs for kube-apiserver [b2e8459e4cd9] ...
	I0913 17:19:44.530948    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e8459e4cd9"
	I0913 17:19:44.545743    5271 logs.go:123] Gathering logs for etcd [9ba22d798507] ...
	I0913 17:19:44.545754    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ba22d798507"
	I0913 17:19:44.559460    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:19:44.559472    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:19:44.594069    5271 logs.go:123] Gathering logs for etcd [b1a82bf46d1b] ...
	I0913 17:19:44.594079    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1a82bf46d1b"
	I0913 17:19:44.608382    5271 logs.go:123] Gathering logs for storage-provisioner [5b6e0dea8170] ...
	I0913 17:19:44.608398    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b6e0dea8170"
	I0913 17:19:44.621732    5271 logs.go:123] Gathering logs for kube-apiserver [c4642c4570af] ...
	I0913 17:19:44.621748    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4642c4570af"
	I0913 17:19:44.648738    5271 logs.go:123] Gathering logs for kube-controller-manager [b9f8bb22fd83] ...
	I0913 17:19:44.648757    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9f8bb22fd83"
	I0913 17:19:47.169489    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:19:52.171782    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:19:52.171844    5271 kubeadm.go:597] duration metric: took 4m3.193140333s to restartPrimaryControlPlane
	W0913 17:19:52.171892    5271 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0913 17:19:52.171915    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0913 17:19:53.198581    5271 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.026669916s)
	I0913 17:19:53.198654    5271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 17:19:53.203661    5271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 17:19:53.206529    5271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 17:19:53.209378    5271 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 17:19:53.209384    5271 kubeadm.go:157] found existing configuration files:
	
	I0913 17:19:53.209416    5271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf
	I0913 17:19:53.211955    5271 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 17:19:53.211980    5271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 17:19:53.214880    5271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf
	I0913 17:19:53.217666    5271 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 17:19:53.217695    5271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 17:19:53.220244    5271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf
	I0913 17:19:53.222777    5271 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 17:19:53.222800    5271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 17:19:53.225729    5271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf
	I0913 17:19:53.228212    5271 kubeadm.go:163] "https://control-plane.minikube.internal:50503" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50503 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 17:19:53.228236    5271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
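The grep-then-rm sequence above is minikube's stale-config cleanup: if a kubeconfig under /etc/kubernetes does not mention the expected control-plane endpoint (here the files are simply absent after the reset), it is deleted so kubeadm init can regenerate it. A sketch of the loop; runSSH stands in for minikube's ssh_runner and is an assumed helper:

package main

import "fmt"

// cleanupStaleConfigs removes any /etc/kubernetes/*.conf that fails a
// grep for the expected endpoint, matching the four grep/rm pairs above.
func cleanupStaleConfigs(runSSH func(cmd string) error, endpoint string) {
	for _, c := range []string{"admin.conf", "kubelet.conf",
		"controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + c
		// grep exits non-zero when the endpoint (or the file) is missing
		if err := runSSH(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
			_ = runSSH("sudo rm -f " + path)
		}
	}
}

func main() {
	cleanupStaleConfigs(func(cmd string) error {
		fmt.Println("Run:", cmd)
		return fmt.Errorf("exit status 2") // simulate the missing files above
	}, "https://control-plane.minikube.internal:50503")
}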
	I0913 17:19:53.230831    5271 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 17:19:53.249237    5271 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0913 17:19:53.249267    5271 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 17:19:53.300503    5271 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 17:19:53.300579    5271 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 17:19:53.300634    5271 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 17:19:53.348560    5271 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 17:19:53.352744    5271 out.go:235]   - Generating certificates and keys ...
	I0913 17:19:53.352779    5271 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 17:19:53.352816    5271 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 17:19:53.352858    5271 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0913 17:19:53.352892    5271 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0913 17:19:53.352957    5271 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0913 17:19:53.352987    5271 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0913 17:19:53.353018    5271 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0913 17:19:53.353051    5271 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0913 17:19:53.353095    5271 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0913 17:19:53.353140    5271 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0913 17:19:53.353160    5271 kubeadm.go:310] [certs] Using the existing "sa" key
	I0913 17:19:53.353187    5271 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 17:19:53.542641    5271 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 17:19:53.612353    5271 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 17:19:53.823333    5271 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 17:19:53.914709    5271 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 17:19:53.945786    5271 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 17:19:53.946120    5271 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 17:19:53.946144    5271 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 17:19:54.034729    5271 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 17:19:54.038893    5271 out.go:235]   - Booting up control plane ...
	I0913 17:19:54.038937    5271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 17:19:54.040240    5271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 17:19:54.040729    5271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 17:19:54.040976    5271 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 17:19:54.041831    5271 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0913 17:19:58.543743    5271 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.501712 seconds
	I0913 17:19:58.543804    5271 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 17:19:58.547465    5271 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 17:19:59.061848    5271 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 17:19:59.062015    5271 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-434000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 17:19:59.569779    5271 kubeadm.go:310] [bootstrap-token] Using token: 979w3e.9if25wzhtorqg6a9
	I0913 17:19:59.573413    5271 out.go:235]   - Configuring RBAC rules ...
	I0913 17:19:59.573496    5271 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 17:19:59.573551    5271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 17:19:59.576084    5271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 17:19:59.577265    5271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 17:19:59.578477    5271 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 17:19:59.579842    5271 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 17:19:59.584109    5271 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 17:19:59.746378    5271 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 17:19:59.974894    5271 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 17:19:59.975378    5271 kubeadm.go:310] 
	I0913 17:19:59.975415    5271 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 17:19:59.975422    5271 kubeadm.go:310] 
	I0913 17:19:59.975473    5271 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 17:19:59.975478    5271 kubeadm.go:310] 
	I0913 17:19:59.975494    5271 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 17:19:59.975522    5271 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 17:19:59.975552    5271 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 17:19:59.975556    5271 kubeadm.go:310] 
	I0913 17:19:59.975581    5271 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 17:19:59.975584    5271 kubeadm.go:310] 
	I0913 17:19:59.975611    5271 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 17:19:59.975614    5271 kubeadm.go:310] 
	I0913 17:19:59.975649    5271 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 17:19:59.975698    5271 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 17:19:59.975737    5271 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 17:19:59.975741    5271 kubeadm.go:310] 
	I0913 17:19:59.975791    5271 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 17:19:59.975833    5271 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 17:19:59.975836    5271 kubeadm.go:310] 
	I0913 17:19:59.975883    5271 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 979w3e.9if25wzhtorqg6a9 \
	I0913 17:19:59.975939    5271 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:446f8f90cde123cbedc005b3a5de5af09ada936a0c1ba8e89eedb16e20223601 \
	I0913 17:19:59.975950    5271 kubeadm.go:310] 	--control-plane 
	I0913 17:19:59.975954    5271 kubeadm.go:310] 
	I0913 17:19:59.976009    5271 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 17:19:59.976013    5271 kubeadm.go:310] 
	I0913 17:19:59.976065    5271 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 979w3e.9if25wzhtorqg6a9 \
	I0913 17:19:59.976129    5271 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:446f8f90cde123cbedc005b3a5de5af09ada936a0c1ba8e89eedb16e20223601 
	I0913 17:19:59.976409    5271 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 17:19:59.976419    5271 cni.go:84] Creating CNI manager for ""
	I0913 17:19:59.976428    5271 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:19:59.979635    5271 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 17:19:59.982548    5271 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 17:19:59.985613    5271 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
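The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI config. Its exact contents are not shown in this log; the conflist below is an illustrative assumption of what a minimal bridge + portmap chain looks like, held as a Go constant:

package main

import "fmt"

// bridgeConflist is an assumed, minimal stand-in for the conflist that
// minikube writes when it recommends the bridge CNI on k8s v1.24+.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	fmt.Printf("conflist is %d bytes\n", len(bridgeConflist))
}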
	I0913 17:19:59.990261    5271 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 17:19:59.990308    5271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 17:19:59.990388    5271 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-434000 minikube.k8s.io/updated_at=2024_09_13T17_19_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=stopped-upgrade-434000 minikube.k8s.io/primary=true
	I0913 17:20:00.019783    5271 kubeadm.go:1113] duration metric: took 29.51425ms to wait for elevateKubeSystemPrivileges
	I0913 17:20:00.019798    5271 ops.go:34] apiserver oom_adj: -16
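ops.go reads the apiserver's OOM score adjustment to confirm the kubelet gave it the usual -16 protection. The shell line above (cat /proc/$(pgrep kube-apiserver)/oom_adj) translates to roughly this sketch, with the pgrep flag an assumption:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj resolves the newest kube-apiserver PID and reads its
// legacy oom_adj value, as the /proc check above does (expecting -16).
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	if adj, err := apiserverOOMAdj(); err == nil {
		fmt.Println("apiserver oom_adj:", adj)
	}
}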
	I0913 17:20:00.029195    5271 kubeadm.go:394] duration metric: took 4m11.063679875s to StartCluster
	I0913 17:20:00.029213    5271 settings.go:142] acquiring lock: {Name:mk948e653988f014de7183ca44ad61265c2dc06d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:20:00.029306    5271 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:20:00.029713    5271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/kubeconfig: {Name:mke2b016812cedc34ffbfc79dbc5c22d8c43c377 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:20:00.029920    5271 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:20:00.029931    5271 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0913 17:20:00.029972    5271 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-434000"
	I0913 17:20:00.029983    5271 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-434000"
	W0913 17:20:00.029989    5271 addons.go:243] addon storage-provisioner should already be in state true
	I0913 17:20:00.030001    5271 host.go:66] Checking if "stopped-upgrade-434000" exists ...
	I0913 17:20:00.030030    5271 config.go:182] Loaded profile config "stopped-upgrade-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:20:00.030030    5271 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-434000"
	I0913 17:20:00.030071    5271 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-434000"
	I0913 17:20:00.030978    5271 kapi.go:59] client config for stopped-upgrade-434000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/stopped-upgrade-434000/client.key", CAFile:"/Users/jenkins/minikube-integration/19640-1360/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102685800), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0913 17:20:00.031105    5271 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-434000"
	W0913 17:20:00.031109    5271 addons.go:243] addon default-storageclass should already be in state true
	I0913 17:20:00.031115    5271 host.go:66] Checking if "stopped-upgrade-434000" exists ...
	I0913 17:20:00.033446    5271 out.go:177] * Verifying Kubernetes components...
	I0913 17:20:00.033824    5271 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 17:20:00.037735    5271 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 17:20:00.037742    5271 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/id_rsa Username:docker}
	I0913 17:20:00.041381    5271 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 17:20:00.045449    5271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 17:20:00.049360    5271 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 17:20:00.049367    5271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 17:20:00.049374    5271 sshutil.go:53] new ssh client: &{IP:localhost Port:50468 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/stopped-upgrade-434000/id_rsa Username:docker}
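The "scp memory --> <path> (N bytes)" steps stream an in-memory manifest straight to the guest instead of copying a local file. One way to express that with golang.org/x/crypto/ssh, piping the buffer through sudo tee (an assumed equivalent, not sshutil's actual transfer code):

package sketch

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

// copyMemoryToFile writes an in-memory payload to a root-owned path on
// the guest, mirroring the "scp memory --> ..." log lines above.
func copyMemoryToFile(client *ssh.Client, data []byte, dst string) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	// tee swallows stdin into dst; >/dev/null keeps the session quiet
	return session.Run("sudo tee " + dst + " >/dev/null")
}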
	I0913 17:20:00.130387    5271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 17:20:00.137736    5271 api_server.go:52] waiting for apiserver process to appear ...
	I0913 17:20:00.137793    5271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 17:20:00.141742    5271 api_server.go:72] duration metric: took 111.812666ms to wait for apiserver process to appear ...
	I0913 17:20:00.141753    5271 api_server.go:88] waiting for apiserver healthz status ...
	I0913 17:20:00.141760    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:00.187232    5271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 17:20:00.201972    5271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 17:20:00.522102    5271 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0913 17:20:00.522114    5271 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0913 17:20:05.141795    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:05.141816    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:10.143682    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:10.143723    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:15.143892    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:15.143915    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:20.144608    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:20.144650    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:25.145132    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:25.145172    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:30.145835    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:30.145877    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0913 17:20:30.523934    5271 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0913 17:20:30.527240    5271 out.go:177] * Enabled addons: storage-provisioner
	I0913 17:20:30.534117    5271 addons.go:510] duration metric: took 30.504648875s for enable addons: enabled=[storage-provisioner]
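The default-storageclass callback failed because it must list StorageClasses through the (unreachable) apiserver, while storage-provisioner only needs a manifest applied over SSH and therefore reports success. A client-go sketch of the failing call, using the on-VM kubeconfig path from this log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Against the dead apiserver above this returns the same
	// "dial tcp 10.0.2.15:8443: i/o timeout".
	scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("Error listing StorageClasses:", err)
		return
	}
	for _, sc := range scs.Items {
		fmt.Println(sc.Name)
	}
}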
	I0913 17:20:35.146949    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:35.146970    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:40.148044    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:40.148085    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:45.149568    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:45.149613    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:50.151416    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:50.151455    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:20:55.153629    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:20:55.153651    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:21:00.155748    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:21:00.155901    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:21:00.174430    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:21:00.174518    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:21:00.185357    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:21:00.185443    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:21:00.195751    5271 logs.go:276] 2 containers: [2175886fa9aa 26dc6ce50ada]
	I0913 17:21:00.195832    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:21:00.206285    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:21:00.206361    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:21:00.216323    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:21:00.216400    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:21:00.231433    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:21:00.231515    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:21:00.241471    5271 logs.go:276] 0 containers: []
	W0913 17:21:00.241481    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:21:00.241548    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:21:00.251311    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:21:00.251332    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:21:00.251338    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:21:00.286823    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:21:00.286835    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:21:00.308918    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:21:00.308931    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:21:00.323032    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:21:00.323046    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:21:00.334483    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:21:00.334493    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:21:00.348437    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:21:00.348447    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:21:00.360774    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:21:00.360785    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:21:00.396375    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:21:00.396383    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:21:00.407581    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:21:00.407592    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:21:00.424763    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:21:00.424774    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:21:00.436091    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:21:00.436101    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:21:00.459773    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:21:00.459785    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:21:00.471180    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:21:00.471194    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:21:02.977762    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:21:07.980106    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:21:07.980740    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:21:08.018655    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:21:08.018782    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:21:08.038451    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:21:08.038568    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:21:08.053540    5271 logs.go:276] 2 containers: [2175886fa9aa 26dc6ce50ada]
	I0913 17:21:08.053627    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:21:08.065985    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:21:08.066064    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:21:08.077040    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:21:08.077117    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:21:08.088206    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:21:08.088282    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:21:08.098941    5271 logs.go:276] 0 containers: []
	W0913 17:21:08.098953    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:21:08.099028    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:21:08.112451    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:21:08.112469    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:21:08.112475    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:21:08.148741    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:21:08.148752    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:21:08.152880    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:21:08.152889    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:21:08.166805    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:21:08.166818    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:21:08.180792    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:21:08.180803    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:21:08.192730    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:21:08.192743    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:21:08.212246    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:21:08.212258    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:21:08.223635    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:21:08.223647    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:21:08.260719    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:21:08.260731    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:21:08.272383    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:21:08.272396    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:21:08.286693    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:21:08.286704    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:21:08.298090    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:21:08.298101    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:21:08.309684    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:21:08.309697    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:21:10.834778    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:21:15.837457    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:21:15.837661    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:21:15.855464    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:21:15.855569    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:21:15.869724    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:21:15.869806    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:21:15.881696    5271 logs.go:276] 2 containers: [2175886fa9aa 26dc6ce50ada]
	I0913 17:21:15.881773    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:21:15.891960    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:21:15.892037    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:21:15.907196    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:21:15.907277    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:21:15.924640    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:21:15.924709    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:21:15.934404    5271 logs.go:276] 0 containers: []
	W0913 17:21:15.934415    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:21:15.934481    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:21:15.944391    5271 logs.go:276] 1 containers: [c649410ebd27]
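
Each cycle enumerates the control-plane containers one component at a time with the same docker ps filter, as the ssh_runner.go lines above show. A compact sketch of that step; the component names are copied from the log, but the loop itself is an illustration, not minikube's actual code:

    for component in kube-apiserver etcd coredns kube-scheduler \
                     kube-proxy kube-controller-manager kindnet storage-provisioner; do
      # the k8s_ prefix matches the container names the docker shim assigns to pods
      ids=$(docker ps -a --filter "name=k8s_${component}" --format '{{.ID}}')
      echo "${component}: ${ids:-<none>}"   # kindnet comes back empty here, as in the log
    done
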
	I0913 17:21:15.944405    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:21:15.944411    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:21:15.955243    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:21:15.955252    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:21:15.967019    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:21:15.967029    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:21:15.985790    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:21:15.985802    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:21:15.997842    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:21:15.997853    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:21:16.016437    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:21:16.016447    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:21:16.029945    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:21:16.029957    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:21:16.070425    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:21:16.070440    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:21:16.085159    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:21:16.085168    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:21:16.097468    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:21:16.097478    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:21:16.120716    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:21:16.120726    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:21:16.145709    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:21:16.145724    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:21:16.179830    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:21:16.179838    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
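
A full "Gathering logs" pass combines journald units, filtered dmesg, per-container docker logs, and a kubectl describe, all run through /bin/bash -c on the guest. The commands below are taken verbatim from the log; the two container IDs are examples from the enumeration above and would normally be substituted for each component in turn:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    for id in 44d64bf89981 556b6926a300; do   # e.g. kube-apiserver, etcd
      docker logs --tail 400 "$id"
    done
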
	I0913 17:21:18.684482    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:21:23.687325    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:21:23.687888    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:21:23.729239    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:21:23.729392    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:21:23.751030    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:21:23.751168    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:21:23.765817    5271 logs.go:276] 2 containers: [2175886fa9aa 26dc6ce50ada]
	I0913 17:21:23.765913    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:21:23.778238    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:21:23.778314    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:21:23.789505    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:21:23.789580    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:21:23.800109    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:21:23.800187    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:21:23.810049    5271 logs.go:276] 0 containers: []
	W0913 17:21:23.810059    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:21:23.810129    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:21:23.821915    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:21:23.821931    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:21:23.821936    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:21:23.855927    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:21:23.855938    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:21:23.889757    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:21:23.889772    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:21:23.901403    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:21:23.901416    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:21:23.915439    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:21:23.915449    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:21:23.926942    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:21:23.926952    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:21:23.944578    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:21:23.944588    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:21:23.948675    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:21:23.948684    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:21:23.965157    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:21:23.965169    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:21:23.978553    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:21:23.978566    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:21:23.989761    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:21:23.989772    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:21:24.000731    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:21:24.000744    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:21:24.023857    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:21:24.023865    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
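
The "container status" command is the only one in the pass with a built-in fallback: it prefers crictl when present and falls back to docker otherwise. The backtick substitution `which crictl || echo crictl` still expands to the bare word crictl when the binary is missing, so that command fails and the "|| sudo docker ps -a" branch runs. The same idea written more explicitly:

    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a
    else
      sudo docker ps -a   # fallback when crictl is not installed in the guest
    fi
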
	I0913 17:21:26.537568    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:21:31.540139    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:21:31.540629    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:21:31.580701    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:21:31.580854    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:21:31.603215    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:21:31.603344    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:21:31.618492    5271 logs.go:276] 2 containers: [2175886fa9aa 26dc6ce50ada]
	I0913 17:21:31.618579    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:21:31.630677    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:21:31.630761    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:21:31.645430    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:21:31.645513    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:21:31.655842    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:21:31.655918    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:21:31.666013    5271 logs.go:276] 0 containers: []
	W0913 17:21:31.666025    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:21:31.666099    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:21:31.676967    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:21:31.676984    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:21:31.676989    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:21:31.690772    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:21:31.690784    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:21:31.705504    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:21:31.705517    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:21:31.722560    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:21:31.722571    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:21:31.733722    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:21:31.733736    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:21:31.756808    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:21:31.756816    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:21:31.767693    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:21:31.767702    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:21:31.801380    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:21:31.801387    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:21:31.805330    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:21:31.805338    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:21:31.840484    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:21:31.840496    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:21:31.857049    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:21:31.857060    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:21:31.867927    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:21:31.867937    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:21:31.878911    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:21:31.878922    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:21:34.392071    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:21:39.394472    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:21:39.395020    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:21:39.432789    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:21:39.432939    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:21:39.454183    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:21:39.454331    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:21:39.468742    5271 logs.go:276] 2 containers: [2175886fa9aa 26dc6ce50ada]
	I0913 17:21:39.468840    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:21:39.481076    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:21:39.481151    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:21:39.491944    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:21:39.492028    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:21:39.503148    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:21:39.503230    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:21:39.513366    5271 logs.go:276] 0 containers: []
	W0913 17:21:39.513376    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:21:39.513447    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:21:39.523759    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:21:39.523775    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:21:39.523780    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:21:39.535221    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:21:39.535230    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:21:39.554531    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:21:39.554541    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:21:39.577730    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:21:39.577737    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:21:39.581606    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:21:39.581615    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:21:39.617129    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:21:39.617144    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:21:39.628668    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:21:39.628682    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:21:39.640485    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:21:39.640498    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:21:39.654610    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:21:39.654621    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:21:39.667979    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:21:39.667994    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:21:39.704929    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:21:39.704944    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:21:39.719167    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:21:39.719176    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:21:39.741585    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:21:39.741597    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:21:42.254471    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:21:47.256742    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:21:47.257122    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:21:47.288206    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:21:47.288353    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:21:47.306399    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:21:47.306497    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:21:47.320013    5271 logs.go:276] 2 containers: [2175886fa9aa 26dc6ce50ada]
	I0913 17:21:47.320094    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:21:47.332356    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:21:47.332438    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:21:47.343002    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:21:47.343081    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:21:47.353235    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:21:47.353309    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:21:47.363760    5271 logs.go:276] 0 containers: []
	W0913 17:21:47.363776    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:21:47.363843    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:21:47.377924    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:21:47.377937    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:21:47.377942    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:21:47.392169    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:21:47.392182    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:21:47.406171    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:21:47.406184    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:21:47.417411    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:21:47.417420    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:21:47.431603    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:21:47.431615    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:21:47.449006    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:21:47.449015    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:21:47.461035    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:21:47.461046    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:21:47.465129    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:21:47.465138    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:21:47.498374    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:21:47.498388    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:21:47.522699    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:21:47.522707    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:21:47.533917    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:21:47.533932    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:21:47.545494    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:21:47.545507    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:21:47.579530    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:21:47.579544    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:21:50.094675    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:21:55.097177    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:21:55.097689    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:21:55.137932    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:21:55.138079    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:21:55.157322    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:21:55.157448    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:21:55.174586    5271 logs.go:276] 2 containers: [2175886fa9aa 26dc6ce50ada]
	I0913 17:21:55.174670    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:21:55.185899    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:21:55.185977    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:21:55.195957    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:21:55.196043    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:21:55.206681    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:21:55.206763    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:21:55.216449    5271 logs.go:276] 0 containers: []
	W0913 17:21:55.216470    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:21:55.216530    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:21:55.226585    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:21:55.226604    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:21:55.226611    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:21:55.230943    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:21:55.230952    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:21:55.242168    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:21:55.242181    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:21:55.256381    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:21:55.256396    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:21:55.268503    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:21:55.268513    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:21:55.286365    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:21:55.286376    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:21:55.297784    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:21:55.297801    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:21:55.333186    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:21:55.333194    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:21:55.366711    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:21:55.366725    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:21:55.380948    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:21:55.380964    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:21:55.394994    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:21:55.395003    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:21:55.406376    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:21:55.406386    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:21:55.430468    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:21:55.430475    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:21:57.947824    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:22:02.948951    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:22:02.949234    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:22:02.975011    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:22:02.975146    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:22:02.995638    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:22:02.995723    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:22:03.007959    5271 logs.go:276] 2 containers: [2175886fa9aa 26dc6ce50ada]
	I0913 17:22:03.008049    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:22:03.018556    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:22:03.018637    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:22:03.028448    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:22:03.028525    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:22:03.038484    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:22:03.038557    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:22:03.048815    5271 logs.go:276] 0 containers: []
	W0913 17:22:03.048829    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:22:03.048891    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:22:03.058902    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:22:03.058917    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:22:03.058923    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:22:03.070635    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:22:03.070651    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:22:03.087633    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:22:03.087642    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:22:03.121195    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:22:03.121205    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:22:03.135587    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:22:03.135601    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:22:03.149809    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:22:03.149822    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:22:03.160635    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:22:03.160648    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:22:03.176390    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:22:03.176404    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:22:03.191484    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:22:03.191497    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:22:03.206538    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:22:03.206551    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:22:03.231860    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:22:03.231877    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:22:03.243517    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:22:03.243531    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:22:03.248055    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:22:03.248063    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:22:05.788942    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:22:10.791679    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:22:10.792223    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:22:10.834486    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:22:10.834646    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:22:10.855579    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:22:10.855685    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:22:10.874624    5271 logs.go:276] 2 containers: [2175886fa9aa 26dc6ce50ada]
	I0913 17:22:10.874694    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:22:10.889664    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:22:10.889746    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:22:10.899815    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:22:10.899892    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:22:10.914626    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:22:10.914706    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:22:10.924937    5271 logs.go:276] 0 containers: []
	W0913 17:22:10.924953    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:22:10.925015    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:22:10.935108    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:22:10.935122    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:22:10.935127    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:22:10.952047    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:22:10.952057    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:22:10.977005    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:22:10.977015    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:22:10.988831    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:22:10.988840    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:22:11.024609    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:22:11.024621    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:22:11.042632    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:22:11.042645    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:22:11.054152    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:22:11.054167    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:22:11.065683    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:22:11.065692    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:22:11.079816    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:22:11.079826    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:22:11.095943    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:22:11.095957    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:22:11.108152    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:22:11.108165    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:22:11.142586    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:22:11.142597    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:22:11.146743    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:22:11.146750    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:22:13.666572    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:22:18.669034    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:22:18.669128    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:22:18.680236    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:22:18.680312    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:22:18.691414    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:22:18.691500    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:22:18.702133    5271 logs.go:276] 4 containers: [3e5fc5c6c3eb 1229dab454a2 2175886fa9aa 26dc6ce50ada]
	I0913 17:22:18.702211    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:22:18.712127    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:22:18.712206    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:22:18.722660    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:22:18.722736    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:22:18.732705    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:22:18.732773    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:22:18.743186    5271 logs.go:276] 0 containers: []
	W0913 17:22:18.743197    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:22:18.743261    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:22:18.753276    5271 logs.go:276] 1 containers: [c649410ebd27]
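
From this cycle on, the coredns enumeration returns four containers instead of two: 3e5fc5c6c3eb and 1229dab454a2 join 2175886fa9aa and 26dc6ce50ada. That is consistent with the coredns pods having been recreated while the apiserver stayed unreachable, though the log itself does not confirm the cause. The two new IDs are inspected the same way as the rest:

    for id in 3e5fc5c6c3eb 1229dab454a2; do   # the coredns containers new in this cycle
      docker logs --tail 400 "$id"
    done
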
	I0913 17:22:18.753293    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:22:18.753299    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:22:18.757527    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:22:18.757536    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:22:18.771541    5271 logs.go:123] Gathering logs for coredns [3e5fc5c6c3eb] ...
	I0913 17:22:18.771551    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e5fc5c6c3eb"
	I0913 17:22:18.782812    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:22:18.782823    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:22:18.799848    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:22:18.799861    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:22:18.835652    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:22:18.835663    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:22:18.869565    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:22:18.869580    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:22:18.885895    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:22:18.885906    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:22:18.897564    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:22:18.897575    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:22:18.921048    5271 logs.go:123] Gathering logs for coredns [1229dab454a2] ...
	I0913 17:22:18.921055    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1229dab454a2"
	I0913 17:22:18.931938    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:22:18.931950    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:22:18.950073    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:22:18.950083    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:22:18.964017    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:22:18.964027    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:22:18.975570    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:22:18.975585    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:22:18.987157    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:22:18.987170    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:22:21.501241    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:22:26.503965    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:22:26.504581    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:22:26.544748    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:22:26.544905    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:22:26.565127    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:22:26.565239    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:22:26.581887    5271 logs.go:276] 4 containers: [3e5fc5c6c3eb 1229dab454a2 2175886fa9aa 26dc6ce50ada]
	I0913 17:22:26.581980    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:22:26.594123    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:22:26.594199    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:22:26.605090    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:22:26.605172    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:22:26.618635    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:22:26.618703    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:22:26.631724    5271 logs.go:276] 0 containers: []
	W0913 17:22:26.631736    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:22:26.631807    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:22:26.642899    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:22:26.642918    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:22:26.642924    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:22:26.647600    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:22:26.647609    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:22:26.662347    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:22:26.662358    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:22:26.673810    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:22:26.673820    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:22:26.709162    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:22:26.709171    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:22:26.722872    5271 logs.go:123] Gathering logs for coredns [3e5fc5c6c3eb] ...
	I0913 17:22:26.722886    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e5fc5c6c3eb"
	I0913 17:22:26.734016    5271 logs.go:123] Gathering logs for coredns [1229dab454a2] ...
	I0913 17:22:26.734028    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1229dab454a2"
	I0913 17:22:26.745688    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:22:26.745701    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:22:26.757187    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:22:26.757196    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:22:26.769535    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:22:26.769548    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:22:26.786046    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:22:26.786058    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:22:26.800105    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:22:26.800113    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:22:26.817347    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:22:26.817357    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:22:26.850624    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:22:26.850636    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:22:26.866137    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:22:26.866149    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:22:29.390744    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:22:34.392998    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:22:34.393567    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:22:34.433565    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:22:34.433719    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:22:34.454279    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:22:34.454401    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:22:34.469499    5271 logs.go:276] 4 containers: [3e5fc5c6c3eb 1229dab454a2 2175886fa9aa 26dc6ce50ada]
	I0913 17:22:34.469586    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:22:34.481968    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:22:34.482044    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:22:34.493026    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:22:34.493106    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:22:34.506122    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:22:34.506198    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:22:34.516447    5271 logs.go:276] 0 containers: []
	W0913 17:22:34.516461    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:22:34.516528    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:22:34.527774    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:22:34.527792    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:22:34.527801    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:22:34.542710    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:22:34.542723    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:22:34.567691    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:22:34.567700    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:22:34.578960    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:22:34.578972    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:22:34.614361    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:22:34.614371    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:22:34.649354    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:22:34.649372    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:22:34.661111    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:22:34.661124    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:22:34.681336    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:22:34.681353    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:22:34.685939    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:22:34.685947    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:22:34.699605    5271 logs.go:123] Gathering logs for coredns [1229dab454a2] ...
	I0913 17:22:34.699616    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1229dab454a2"
	I0913 17:22:34.711228    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:22:34.711239    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:22:34.725926    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:22:34.725939    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:22:34.736947    5271 logs.go:123] Gathering logs for coredns [3e5fc5c6c3eb] ...
	I0913 17:22:34.736956    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e5fc5c6c3eb"
	I0913 17:22:34.748656    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:22:34.748664    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:22:34.761019    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:22:34.761030    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:22:37.274865    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:22:42.275740    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:22:42.275831    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:22:42.287871    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:22:42.287957    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:22:42.299899    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:22:42.299962    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:22:42.310327    5271 logs.go:276] 4 containers: [3e5fc5c6c3eb 1229dab454a2 2175886fa9aa 26dc6ce50ada]
	I0913 17:22:42.310404    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:22:42.321937    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:22:42.322028    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:22:42.333820    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:22:42.333883    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:22:42.345487    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:22:42.345554    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:22:42.356078    5271 logs.go:276] 0 containers: []
	W0913 17:22:42.356090    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:22:42.356147    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:22:42.368115    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:22:42.368130    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:22:42.368136    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:22:42.384750    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:22:42.384761    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:22:42.399004    5271 logs.go:123] Gathering logs for coredns [3e5fc5c6c3eb] ...
	I0913 17:22:42.399017    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e5fc5c6c3eb"
	I0913 17:22:42.411453    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:22:42.411465    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:22:42.425873    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:22:42.425886    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:22:42.437752    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:22:42.437762    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:22:42.473390    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:22:42.473405    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:22:42.491317    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:22:42.491330    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:22:42.503930    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:22:42.503942    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:22:42.510419    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:22:42.510432    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:22:42.563725    5271 logs.go:123] Gathering logs for coredns [1229dab454a2] ...
	I0913 17:22:42.563736    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1229dab454a2"
	I0913 17:22:42.576468    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:22:42.576479    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:22:42.590202    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:22:42.590214    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:22:42.609017    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:22:42.609026    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:22:42.622980    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:22:42.622995    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:22:45.150565    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:22:50.152810    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:22:50.152979    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:22:50.189754    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:22:50.189852    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:22:50.207096    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:22:50.207197    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:22:50.221575    5271 logs.go:276] 4 containers: [3e5fc5c6c3eb 1229dab454a2 2175886fa9aa 26dc6ce50ada]
	I0913 17:22:50.221668    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:22:50.239993    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:22:50.240081    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:22:50.252313    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:22:50.252397    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:22:50.264618    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:22:50.264704    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:22:50.276496    5271 logs.go:276] 0 containers: []
	W0913 17:22:50.276511    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:22:50.276594    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:22:50.288340    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:22:50.288360    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:22:50.288366    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:22:50.293493    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:22:50.293511    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:22:50.333473    5271 logs.go:123] Gathering logs for coredns [1229dab454a2] ...
	I0913 17:22:50.333484    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1229dab454a2"
	I0913 17:22:50.347712    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:22:50.347723    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:22:50.373695    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:22:50.373703    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:22:50.409520    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:22:50.409527    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:22:50.424508    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:22:50.424523    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:22:50.439049    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:22:50.439058    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:22:50.453530    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:22:50.453544    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:22:50.465539    5271 logs.go:123] Gathering logs for coredns [3e5fc5c6c3eb] ...
	I0913 17:22:50.465551    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e5fc5c6c3eb"
	I0913 17:22:50.477348    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:22:50.477360    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:22:50.488964    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:22:50.488973    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:22:50.501077    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:22:50.501089    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:22:50.522417    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:22:50.522427    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:22:50.534726    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:22:50.534738    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:22:53.048952    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:22:58.051223    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:22:58.051770    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:22:58.090589    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:22:58.090744    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:22:58.114775    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:22:58.114889    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:22:58.129773    5271 logs.go:276] 4 containers: [3e5fc5c6c3eb 1229dab454a2 2175886fa9aa 26dc6ce50ada]
	I0913 17:22:58.129867    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:22:58.152631    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:22:58.152705    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:22:58.167261    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:22:58.167348    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:22:58.179111    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:22:58.179187    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:22:58.189544    5271 logs.go:276] 0 containers: []
	W0913 17:22:58.189558    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:22:58.189629    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:22:58.200054    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:22:58.200072    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:22:58.200079    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:22:58.217127    5271 logs.go:123] Gathering logs for coredns [1229dab454a2] ...
	I0913 17:22:58.217141    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1229dab454a2"
	I0913 17:22:58.232541    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:22:58.232556    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:22:58.244991    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:22:58.245005    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:22:58.256740    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:22:58.256751    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:22:58.270974    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:22:58.270984    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:22:58.305835    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:22:58.305846    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:22:58.309839    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:22:58.309849    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:22:58.343292    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:22:58.343303    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:22:58.357666    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:22:58.357676    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:22:58.382231    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:22:58.382240    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:22:58.399660    5271 logs.go:123] Gathering logs for coredns [3e5fc5c6c3eb] ...
	I0913 17:22:58.399672    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e5fc5c6c3eb"
	I0913 17:22:58.422649    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:22:58.422663    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:22:58.434353    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:22:58.434364    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:22:58.451316    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:22:58.451327    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:23:00.964319    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:23:05.966683    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:23:05.966774    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:23:05.979241    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:23:05.979327    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:23:05.990894    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:23:05.990967    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:23:06.002485    5271 logs.go:276] 4 containers: [3e5fc5c6c3eb 1229dab454a2 2175886fa9aa 26dc6ce50ada]
	I0913 17:23:06.002571    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:23:06.013673    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:23:06.013741    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:23:06.025954    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:23:06.026017    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:23:06.036520    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:23:06.036586    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:23:06.051153    5271 logs.go:276] 0 containers: []
	W0913 17:23:06.051163    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:23:06.051219    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:23:06.066983    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:23:06.067002    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:23:06.067008    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:23:06.092643    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:23:06.092666    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:23:06.105410    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:23:06.105423    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:23:06.120027    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:23:06.120038    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:23:06.132037    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:23:06.132048    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:23:06.167267    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:23:06.167280    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:23:06.182120    5271 logs.go:123] Gathering logs for coredns [3e5fc5c6c3eb] ...
	I0913 17:23:06.182132    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e5fc5c6c3eb"
	I0913 17:23:06.195175    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:23:06.195187    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:23:06.200909    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:23:06.200920    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:23:06.237082    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:23:06.237094    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:23:06.253165    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:23:06.253179    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:23:06.278893    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:23:06.278905    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:23:06.292809    5271 logs.go:123] Gathering logs for coredns [1229dab454a2] ...
	I0913 17:23:06.292821    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1229dab454a2"
	I0913 17:23:06.307238    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:23:06.307255    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:23:06.322672    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:23:06.322685    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:23:08.835306    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:23:13.838048    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:23:13.838311    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:23:13.857384    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:23:13.857482    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:23:13.870925    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:23:13.871014    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:23:13.883078    5271 logs.go:276] 4 containers: [3e5fc5c6c3eb 1229dab454a2 2175886fa9aa 26dc6ce50ada]
	I0913 17:23:13.883165    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:23:13.893485    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:23:13.893551    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:23:13.904270    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:23:13.904333    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:23:13.915656    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:23:13.915738    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:23:13.925508    5271 logs.go:276] 0 containers: []
	W0913 17:23:13.925521    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:23:13.925592    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:23:13.935554    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:23:13.935571    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:23:13.935576    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:23:13.971466    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:23:13.971476    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:23:13.983010    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:23:13.983022    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:23:13.993824    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:23:13.993837    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:23:14.005706    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:23:14.005717    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:23:14.030093    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:23:14.030103    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:23:14.044444    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:23:14.044454    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:23:14.049098    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:23:14.049106    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:23:14.063323    5271 logs.go:123] Gathering logs for coredns [3e5fc5c6c3eb] ...
	I0913 17:23:14.063335    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e5fc5c6c3eb"
	I0913 17:23:14.075744    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:23:14.075758    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:23:14.087321    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:23:14.087333    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:23:14.104675    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:23:14.104684    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:23:14.117582    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:23:14.117597    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:23:14.152836    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:23:14.152850    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:23:14.166531    5271 logs.go:123] Gathering logs for coredns [1229dab454a2] ...
	I0913 17:23:14.166544    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1229dab454a2"
	I0913 17:23:16.683653    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:23:21.686424    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:23:21.686729    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:23:21.715941    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:23:21.716078    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:23:21.734472    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:23:21.734567    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:23:21.748504    5271 logs.go:276] 4 containers: [3e5fc5c6c3eb 1229dab454a2 2175886fa9aa 26dc6ce50ada]
	I0913 17:23:21.748592    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:23:21.761258    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:23:21.761332    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:23:21.771743    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:23:21.771819    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:23:21.782264    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:23:21.782336    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:23:21.792207    5271 logs.go:276] 0 containers: []
	W0913 17:23:21.792224    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:23:21.792295    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:23:21.802790    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:23:21.802807    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:23:21.802812    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:23:21.838364    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:23:21.838375    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:23:21.852717    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:23:21.852727    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:23:21.864768    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:23:21.864784    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:23:21.869194    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:23:21.869203    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:23:21.902807    5271 logs.go:123] Gathering logs for coredns [3e5fc5c6c3eb] ...
	I0913 17:23:21.902820    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e5fc5c6c3eb"
	I0913 17:23:21.914891    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:23:21.914904    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:23:21.926042    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:23:21.926055    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:23:21.940241    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:23:21.940252    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:23:21.957661    5271 logs.go:123] Gathering logs for coredns [1229dab454a2] ...
	I0913 17:23:21.957670    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1229dab454a2"
	I0913 17:23:21.969750    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:23:21.969761    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:23:21.981304    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:23:21.981314    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:23:22.004538    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:23:22.004546    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:23:22.016350    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:23:22.016361    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:23:22.031205    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:23:22.031219    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:23:24.544601    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:23:29.545970    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:23:29.546049    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:23:29.558675    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:23:29.558734    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:23:29.569551    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:23:29.569624    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:23:29.580617    5271 logs.go:276] 4 containers: [3e5fc5c6c3eb 1229dab454a2 2175886fa9aa 26dc6ce50ada]
	I0913 17:23:29.580701    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:23:29.592256    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:23:29.592337    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:23:29.603824    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:23:29.603892    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:23:29.615628    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:23:29.615724    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:23:29.626802    5271 logs.go:276] 0 containers: []
	W0913 17:23:29.626814    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:23:29.626884    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:23:29.638183    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:23:29.638206    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:23:29.638213    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:23:29.651375    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:23:29.651388    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:23:29.665567    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:23:29.665583    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:23:29.678015    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:23:29.678028    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:23:29.704114    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:23:29.704128    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:23:29.716924    5271 logs.go:123] Gathering logs for coredns [1229dab454a2] ...
	I0913 17:23:29.716941    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1229dab454a2"
	I0913 17:23:29.729632    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:23:29.729644    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:23:29.767745    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:23:29.767759    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:23:29.783446    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:23:29.783458    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:23:29.802068    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:23:29.802080    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:23:29.837572    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:23:29.837588    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:23:29.852507    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:23:29.852516    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:23:29.867420    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:23:29.867434    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:23:29.872924    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:23:29.872937    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:23:29.886151    5271 logs.go:123] Gathering logs for coredns [3e5fc5c6c3eb] ...
	I0913 17:23:29.886162    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e5fc5c6c3eb"
	I0913 17:23:32.400053    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:23:37.401360    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:23:37.401968    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:23:37.443270    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:23:37.443442    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:23:37.465201    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:23:37.465344    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:23:37.487967    5271 logs.go:276] 4 containers: [3e5fc5c6c3eb 1229dab454a2 2175886fa9aa 26dc6ce50ada]
	I0913 17:23:37.488059    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:23:37.499234    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:23:37.499316    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:23:37.510058    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:23:37.510136    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:23:37.520702    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:23:37.520789    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:23:37.530873    5271 logs.go:276] 0 containers: []
	W0913 17:23:37.530885    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:23:37.530955    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:23:37.541464    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:23:37.541482    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:23:37.541489    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:23:37.552925    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:23:37.552935    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:23:37.566920    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:23:37.566930    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:23:37.602416    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:23:37.602426    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:23:37.636198    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:23:37.636211    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:23:37.650210    5271 logs.go:123] Gathering logs for coredns [1229dab454a2] ...
	I0913 17:23:37.650223    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1229dab454a2"
	I0913 17:23:37.661446    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:23:37.661457    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:23:37.673262    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:23:37.673276    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:23:37.708271    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:23:37.708294    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:23:37.723203    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:23:37.723213    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:23:37.747790    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:23:37.747797    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:23:37.759525    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:23:37.759538    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:23:37.764277    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:23:37.764282    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:23:37.778254    5271 logs.go:123] Gathering logs for coredns [3e5fc5c6c3eb] ...
	I0913 17:23:37.778265    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e5fc5c6c3eb"
	I0913 17:23:37.790036    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:23:37.790047    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:23:40.301993    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:23:45.304158    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:23:45.304449    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:23:45.326788    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:23:45.326907    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:23:45.348783    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:23:45.348869    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:23:45.361458    5271 logs.go:276] 4 containers: [3e5fc5c6c3eb 1229dab454a2 2175886fa9aa 26dc6ce50ada]
	I0913 17:23:45.361536    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:23:45.371685    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:23:45.371763    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:23:45.383021    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:23:45.383099    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:23:45.393208    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:23:45.393296    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:23:45.403520    5271 logs.go:276] 0 containers: []
	W0913 17:23:45.403529    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:23:45.403588    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:23:45.413700    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:23:45.413722    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:23:45.413729    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:23:45.425258    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:23:45.425268    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:23:45.458931    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:23:45.458942    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:23:45.463722    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:23:45.463732    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:23:45.477239    5271 logs.go:123] Gathering logs for coredns [1229dab454a2] ...
	I0913 17:23:45.477251    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1229dab454a2"
	I0913 17:23:45.488843    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:23:45.488857    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:23:45.504384    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:23:45.504397    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:23:45.515940    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:23:45.515950    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:23:45.550700    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:23:45.550712    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:23:45.565996    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:23:45.566009    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:23:45.586150    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:23:45.586159    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:23:45.610204    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:23:45.610212    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:23:45.625657    5271 logs.go:123] Gathering logs for coredns [3e5fc5c6c3eb] ...
	I0913 17:23:45.625666    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e5fc5c6c3eb"
	I0913 17:23:45.637171    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:23:45.637182    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:23:45.648797    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:23:45.648808    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:23:48.162877    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:23:53.107377    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:23:53.107757    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0913 17:23:53.138383    5271 logs.go:276] 1 containers: [44d64bf89981]
	I0913 17:23:53.138502    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0913 17:23:53.157126    5271 logs.go:276] 1 containers: [556b6926a300]
	I0913 17:23:53.157244    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0913 17:23:53.175668    5271 logs.go:276] 4 containers: [3e5fc5c6c3eb 1229dab454a2 2175886fa9aa 26dc6ce50ada]
	I0913 17:23:53.175749    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0913 17:23:53.188610    5271 logs.go:276] 1 containers: [49c316e92ea1]
	I0913 17:23:53.188704    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0913 17:23:53.199618    5271 logs.go:276] 1 containers: [30125d3aa36b]
	I0913 17:23:53.199703    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0913 17:23:53.211236    5271 logs.go:276] 1 containers: [9d0c68aa034e]
	I0913 17:23:53.211321    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0913 17:23:53.222883    5271 logs.go:276] 0 containers: []
	W0913 17:23:53.222895    5271 logs.go:278] No container was found matching "kindnet"
	I0913 17:23:53.222984    5271 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0913 17:23:53.234484    5271 logs.go:276] 1 containers: [c649410ebd27]
	I0913 17:23:53.234502    5271 logs.go:123] Gathering logs for kubelet ...
	I0913 17:23:53.234507    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0913 17:23:53.270774    5271 logs.go:123] Gathering logs for Docker ...
	I0913 17:23:53.270791    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0913 17:23:53.295356    5271 logs.go:123] Gathering logs for kube-controller-manager [9d0c68aa034e] ...
	I0913 17:23:53.295372    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d0c68aa034e"
	I0913 17:23:53.314440    5271 logs.go:123] Gathering logs for storage-provisioner [c649410ebd27] ...
	I0913 17:23:53.314452    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c649410ebd27"
	I0913 17:23:53.327306    5271 logs.go:123] Gathering logs for coredns [3e5fc5c6c3eb] ...
	I0913 17:23:53.327319    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e5fc5c6c3eb"
	I0913 17:23:53.340921    5271 logs.go:123] Gathering logs for coredns [1229dab454a2] ...
	I0913 17:23:53.340932    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1229dab454a2"
	I0913 17:23:53.353959    5271 logs.go:123] Gathering logs for coredns [26dc6ce50ada] ...
	I0913 17:23:53.353970    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26dc6ce50ada"
	I0913 17:23:53.367224    5271 logs.go:123] Gathering logs for kube-scheduler [49c316e92ea1] ...
	I0913 17:23:53.367236    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49c316e92ea1"
	I0913 17:23:53.383431    5271 logs.go:123] Gathering logs for dmesg ...
	I0913 17:23:53.383445    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0913 17:23:53.388241    5271 logs.go:123] Gathering logs for kube-apiserver [44d64bf89981] ...
	I0913 17:23:53.388252    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44d64bf89981"
	I0913 17:23:53.404147    5271 logs.go:123] Gathering logs for etcd [556b6926a300] ...
	I0913 17:23:53.404158    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 556b6926a300"
	I0913 17:23:53.418242    5271 logs.go:123] Gathering logs for container status ...
	I0913 17:23:53.418254    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0913 17:23:53.430830    5271 logs.go:123] Gathering logs for describe nodes ...
	I0913 17:23:53.430843    5271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0913 17:23:53.470103    5271 logs.go:123] Gathering logs for coredns [2175886fa9aa] ...
	I0913 17:23:53.470114    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2175886fa9aa"
	I0913 17:23:53.483350    5271 logs.go:123] Gathering logs for kube-proxy [30125d3aa36b] ...
	I0913 17:23:53.483359    5271 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30125d3aa36b"
	I0913 17:23:55.999440    5271 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0913 17:24:01.001966    5271 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0913 17:24:01.008018    5271 out.go:201] 
	W0913 17:24:01.013229    5271 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0913 17:24:01.013273    5271 out.go:270] * 
	W0913 17:24:01.015702    5271 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:24:01.030196    5271 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-434000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (573.30s)
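
The loop above is minikube's start path polling the apiserver health endpoint: each probe of https://10.0.2.15:8443/healthz is cut off after roughly five seconds ("Client.Timeout exceeded while awaiting headers"), a round of per-component log gathering runs between probes, and once the overall 6m0s node wait expires the run exits with GUEST_START. As a reading aid only, here is a minimal Go sketch of that poll-until-deadline shape; it is not minikube's implementation. The URL, per-probe timeout, and overall deadline are taken from the log, while the helper name and the sleep between probes are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the overall deadline
// passes: a short per-probe timeout (the "Client.Timeout exceeded" errors
// above) inside a much longer overall wait (the 6m0s node wait).
func waitForHealthz(url string, probeTimeout, overall time.Duration) error {
	client := &http.Client{
		Timeout: probeTimeout,
		// During bring-up the apiserver serves a self-signed certificate,
		// so a diagnostic probe like this skips verification.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second) // assumed pause between probes
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	err := waitForHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 6*time.Minute)
	if err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}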

TestPause/serial/Start (10.12s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-375000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-375000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.049711375s)

-- stdout --
	* [pause-375000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-375000" primary control-plane node in "pause-375000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-375000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-375000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-375000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-375000 -n pause-375000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-375000 -n pause-375000: exit status 7 (66.119125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-375000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.12s)
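
Every qemu2 start in this report fails before Kubernetes is involved: the driver cannot reach the socket_vmnet control socket, retries once after deleting and recreating the VM, then aborts with GUEST_PROVISION. The "Connection refused" can be checked outside minikube by dialing the unix socket directly; this is a hypothetical diagnostic sketch, assuming only the /var/run/socket_vmnet path shown in the output above.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// socket_vmnet listens on a unix-domain socket; if no daemon is
	// accepting connections there, this dial fails with "connection
	// refused", matching the ERROR lines in the test output above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}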

TestNoKubernetes/serial/StartWithK8s (9.89s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-004000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-004000 --driver=qemu2 : exit status 80 (9.825748875s)

-- stdout --
	* [NoKubernetes-004000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-004000" primary control-plane node in "NoKubernetes-004000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-004000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-004000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-004000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-004000 -n NoKubernetes-004000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-004000 -n NoKubernetes-004000: exit status 7 (66.523959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-004000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.89s)
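
The post-mortem block repeated in these sections leans on minikube's status exit codes: "status --format={{.Host}}" prints the host state, and the harness treats exit status 7 together with the "Stopped" output as an expected condition ("may be ok"), so it skips log retrieval instead of failing harder. A sketch of reading both the state and the exit code from Go follows; the binary path and flags are copied from the log, while the hostState helper is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostState runs the same status probe as the post-mortem above and
// returns the printed host state plus the process exit code.
func hostState(profile string) (string, int, error) {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// A non-zero exit still carries usable output; in the runs above,
		// exit code 7 pairs with the state "Stopped".
		return strings.TrimSpace(string(out)), exitErr.ExitCode(), nil
	}
	return strings.TrimSpace(string(out)), 0, err
}

func main() {
	state, code, err := hostState("NoKubernetes-004000")
	if err != nil {
		fmt.Println("status probe failed:", err)
		return
	}
	fmt.Printf("host=%q exit=%d (7 with \"Stopped\" is treated as may-be-ok)\n", state, code)
}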

TestNoKubernetes/serial/StartWithStopK8s (5.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-004000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-004000 --no-kubernetes --driver=qemu2 : exit status 80 (5.245359708s)

-- stdout --
	* [NoKubernetes-004000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-004000
	* Restarting existing qemu2 VM for "NoKubernetes-004000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-004000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-004000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-004000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-004000 -n NoKubernetes-004000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-004000 -n NoKubernetes-004000: exit status 7 (56.725042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-004000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)
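
The post-mortem helper above queries only the Host field. Exit status 7 from minikube status is not a crash: minikube encodes per-component health in the low bits of the status exit code (an assumption based on minikube's documented status codes), which is why the harness tags it "may be ok" and simply skips log retrieval for the stopped host. To inspect all component states rather than just Host, a sketch using the same binary:

	$ out/minikube-darwin-arm64 status -p NoKubernetes-004000
	# machine-readable form:
	$ out/minikube-darwin-arm64 status -p NoKubernetes-004000 -o json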

TestNoKubernetes/serial/Start (5.29s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-004000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-004000 --no-kubernetes --driver=qemu2 : exit status 80 (5.232870583s)

-- stdout --
	* [NoKubernetes-004000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-004000
	* Restarting existing qemu2 VM for "NoKubernetes-004000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-004000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-004000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-004000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-004000 -n NoKubernetes-004000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-004000 -n NoKubernetes-004000: exit status 7 (58.265ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-004000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.29s)

TestNoKubernetes/serial/StartNoArgs (5.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-004000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-004000 --driver=qemu2 : exit status 80 (5.257898875s)

-- stdout --
	* [NoKubernetes-004000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-004000
	* Restarting existing qemu2 VM for "NoKubernetes-004000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-004000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-004000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-004000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-004000 -n NoKubernetes-004000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-004000 -n NoKubernetes-004000: exit status 7 (56.836208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-004000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)
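
The exit-80 pattern throughout this group is minikube's own retry logic: it restarts the existing VM, hits the same refused connection, retries once, then exits with GUEST_PROVISION. The stderr already names the recovery step; as a sketch (profile name taken from this log), the sequence would be:

	$ out/minikube-darwin-arm64 delete -p NoKubernetes-004000
	$ out/minikube-darwin-arm64 start -p NoKubernetes-004000 --driver=qemu2

Deleting the profile only helps once the socket_vmnet daemon itself is reachable again; otherwise a fresh start fails the same way, as the NetworkPlugins group below demonstrates with brand-new profiles.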

TestNetworkPlugins/group/auto/Start (9.81s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.81052575s)

-- stdout --
	* [auto-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-234000" primary control-plane node in "auto-234000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-234000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:22:06.384591    5512 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:22:06.384719    5512 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:22:06.384722    5512 out.go:358] Setting ErrFile to fd 2...
	I0913 17:22:06.384725    5512 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:22:06.384855    5512 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:22:06.385941    5512 out.go:352] Setting JSON to false
	I0913 17:22:06.402129    5512 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4890,"bootTime":1726268436,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:22:06.402207    5512 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:22:06.408538    5512 out.go:177] * [auto-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:22:06.416178    5512 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:22:06.416268    5512 notify.go:220] Checking for updates...
	I0913 17:22:06.423312    5512 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:22:06.424849    5512 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:22:06.427321    5512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:22:06.430362    5512 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:22:06.435324    5512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:22:06.438685    5512 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:22:06.438750    5512 config.go:182] Loaded profile config "stopped-upgrade-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:22:06.438792    5512 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:22:06.443168    5512 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:22:06.450355    5512 start.go:297] selected driver: qemu2
	I0913 17:22:06.450361    5512 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:22:06.450368    5512 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:22:06.452520    5512 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:22:06.455324    5512 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:22:06.458449    5512 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:22:06.458466    5512 cni.go:84] Creating CNI manager for ""
	I0913 17:22:06.458490    5512 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:22:06.458498    5512 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 17:22:06.458526    5512 start.go:340] cluster config:
	{Name:auto-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:22:06.461961    5512 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:22:06.468226    5512 out.go:177] * Starting "auto-234000" primary control-plane node in "auto-234000" cluster
	I0913 17:22:06.472389    5512 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:22:06.472405    5512 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:22:06.472415    5512 cache.go:56] Caching tarball of preloaded images
	I0913 17:22:06.472482    5512 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:22:06.472488    5512 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:22:06.472552    5512 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/auto-234000/config.json ...
	I0913 17:22:06.472564    5512 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/auto-234000/config.json: {Name:mkb373c0a007ea1b4404178f7e86af015c8f5256 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:22:06.472876    5512 start.go:360] acquireMachinesLock for auto-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:22:06.472905    5512 start.go:364] duration metric: took 23.833µs to acquireMachinesLock for "auto-234000"
	I0913 17:22:06.472915    5512 start.go:93] Provisioning new machine with config: &{Name:auto-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:22:06.472946    5512 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:22:06.481308    5512 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:22:06.496299    5512 start.go:159] libmachine.API.Create for "auto-234000" (driver="qemu2")
	I0913 17:22:06.496337    5512 client.go:168] LocalClient.Create starting
	I0913 17:22:06.496405    5512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:22:06.496439    5512 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:06.496452    5512 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:06.496486    5512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:22:06.496512    5512 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:06.496520    5512 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:06.496968    5512 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:22:06.659290    5512 main.go:141] libmachine: Creating SSH key...
	I0913 17:22:06.722855    5512 main.go:141] libmachine: Creating Disk image...
	I0913 17:22:06.722862    5512 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:22:06.723068    5512 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/disk.qcow2
	I0913 17:22:06.732652    5512 main.go:141] libmachine: STDOUT: 
	I0913 17:22:06.732670    5512 main.go:141] libmachine: STDERR: 
	I0913 17:22:06.732727    5512 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/disk.qcow2 +20000M
	I0913 17:22:06.740812    5512 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:22:06.740834    5512 main.go:141] libmachine: STDERR: 
	I0913 17:22:06.740853    5512 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/disk.qcow2
	I0913 17:22:06.740862    5512 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:22:06.740874    5512 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:22:06.740903    5512 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:01:d6:a0:6f:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/disk.qcow2
	I0913 17:22:06.742500    5512 main.go:141] libmachine: STDOUT: 
	I0913 17:22:06.742514    5512 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:22:06.742538    5512 client.go:171] duration metric: took 246.196208ms to LocalClient.Create
	I0913 17:22:08.744716    5512 start.go:128] duration metric: took 2.271770292s to createHost
	I0913 17:22:08.744803    5512 start.go:83] releasing machines lock for "auto-234000", held for 2.271922s
	W0913 17:22:08.744853    5512 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:22:08.759315    5512 out.go:177] * Deleting "auto-234000" in qemu2 ...
	W0913 17:22:08.793251    5512 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:22:08.793282    5512 start.go:729] Will try again in 5 seconds ...
	I0913 17:22:13.795403    5512 start.go:360] acquireMachinesLock for auto-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:22:13.795916    5512 start.go:364] duration metric: took 427.458µs to acquireMachinesLock for "auto-234000"
	I0913 17:22:13.795980    5512 start.go:93] Provisioning new machine with config: &{Name:auto-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:22:13.796292    5512 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:22:13.806722    5512 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:22:13.857826    5512 start.go:159] libmachine.API.Create for "auto-234000" (driver="qemu2")
	I0913 17:22:13.857877    5512 client.go:168] LocalClient.Create starting
	I0913 17:22:13.858012    5512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:22:13.858100    5512 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:13.858115    5512 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:13.858178    5512 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:22:13.858223    5512 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:13.858238    5512 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:13.858860    5512 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:22:14.029803    5512 main.go:141] libmachine: Creating SSH key...
	I0913 17:22:14.099155    5512 main.go:141] libmachine: Creating Disk image...
	I0913 17:22:14.099163    5512 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:22:14.099355    5512 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/disk.qcow2
	I0913 17:22:14.109178    5512 main.go:141] libmachine: STDOUT: 
	I0913 17:22:14.109202    5512 main.go:141] libmachine: STDERR: 
	I0913 17:22:14.109274    5512 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/disk.qcow2 +20000M
	I0913 17:22:14.117825    5512 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:22:14.117839    5512 main.go:141] libmachine: STDERR: 
	I0913 17:22:14.117851    5512 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/disk.qcow2
	I0913 17:22:14.117855    5512 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:22:14.117875    5512 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:22:14.117900    5512 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:b5:7a:87:05:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/auto-234000/disk.qcow2
	I0913 17:22:14.119561    5512 main.go:141] libmachine: STDOUT: 
	I0913 17:22:14.119577    5512 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:22:14.119590    5512 client.go:171] duration metric: took 261.712083ms to LocalClient.Create
	I0913 17:22:16.121742    5512 start.go:128] duration metric: took 2.325454542s to createHost
	I0913 17:22:16.121804    5512 start.go:83] releasing machines lock for "auto-234000", held for 2.325898166s
	W0913 17:22:16.122136    5512 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:22:16.135997    5512 out.go:201] 
	W0913 17:22:16.140168    5512 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:22:16.140215    5512 out.go:270] * 
	* 
	W0913 17:22:16.142421    5512 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:22:16.153057    5512 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.81s)
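
The verbose trace above shows the exact launch path: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client with the socket path followed by the full qemu-system-aarch64 command line, and the client is expected to pass its connected vmnet socket to QEMU as file descriptor 3 (hence -netdev socket,id=net0,fd=3). That makes the failing step reproducible without minikube; a minimal sketch, substituting a harmless command for QEMU:

	$ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok
	# With the daemon down, this should fail exactly as in the log:
	# Failed to connect to "/var/run/socket_vmnet": Connection refused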

TestNetworkPlugins/group/kindnet/Start (9.83s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.826701709s)

-- stdout --
	* [kindnet-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-234000" primary control-plane node in "kindnet-234000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-234000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:22:18.326434    5621 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:22:18.326580    5621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:22:18.326583    5621 out.go:358] Setting ErrFile to fd 2...
	I0913 17:22:18.326585    5621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:22:18.326722    5621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:22:18.327835    5621 out.go:352] Setting JSON to false
	I0913 17:22:18.344236    5621 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4902,"bootTime":1726268436,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:22:18.344307    5621 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:22:18.350830    5621 out.go:177] * [kindnet-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:22:18.357658    5621 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:22:18.357694    5621 notify.go:220] Checking for updates...
	I0913 17:22:18.364678    5621 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:22:18.367670    5621 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:22:18.370671    5621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:22:18.373691    5621 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:22:18.376644    5621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:22:18.380058    5621 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:22:18.380136    5621 config.go:182] Loaded profile config "stopped-upgrade-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:22:18.380186    5621 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:22:18.382526    5621 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:22:18.389664    5621 start.go:297] selected driver: qemu2
	I0913 17:22:18.389669    5621 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:22:18.389684    5621 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:22:18.391946    5621 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:22:18.393330    5621 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:22:18.396741    5621 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:22:18.396756    5621 cni.go:84] Creating CNI manager for "kindnet"
	I0913 17:22:18.396759    5621 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0913 17:22:18.396790    5621 start.go:340] cluster config:
	{Name:kindnet-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:22:18.400214    5621 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:22:18.407625    5621 out.go:177] * Starting "kindnet-234000" primary control-plane node in "kindnet-234000" cluster
	I0913 17:22:18.411666    5621 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:22:18.411681    5621 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:22:18.411694    5621 cache.go:56] Caching tarball of preloaded images
	I0913 17:22:18.411758    5621 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:22:18.411763    5621 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:22:18.411822    5621 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/kindnet-234000/config.json ...
	I0913 17:22:18.411834    5621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/kindnet-234000/config.json: {Name:mk2b88da94a1323b31f190ef79733c21e267017e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:22:18.412046    5621 start.go:360] acquireMachinesLock for kindnet-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:22:18.412076    5621 start.go:364] duration metric: took 24.5µs to acquireMachinesLock for "kindnet-234000"
	I0913 17:22:18.412086    5621 start.go:93] Provisioning new machine with config: &{Name:kindnet-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:22:18.412107    5621 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:22:18.419651    5621 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:22:18.434984    5621 start.go:159] libmachine.API.Create for "kindnet-234000" (driver="qemu2")
	I0913 17:22:18.435009    5621 client.go:168] LocalClient.Create starting
	I0913 17:22:18.435076    5621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:22:18.435106    5621 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:18.435116    5621 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:18.435156    5621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:22:18.435178    5621 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:18.435187    5621 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:18.435503    5621 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:22:18.595453    5621 main.go:141] libmachine: Creating SSH key...
	I0913 17:22:18.629884    5621 main.go:141] libmachine: Creating Disk image...
	I0913 17:22:18.629889    5621 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:22:18.630046    5621 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/disk.qcow2
	I0913 17:22:18.639179    5621 main.go:141] libmachine: STDOUT: 
	I0913 17:22:18.639200    5621 main.go:141] libmachine: STDERR: 
	I0913 17:22:18.639263    5621 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/disk.qcow2 +20000M
	I0913 17:22:18.647390    5621 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:22:18.647406    5621 main.go:141] libmachine: STDERR: 
	I0913 17:22:18.647425    5621 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/disk.qcow2
	I0913 17:22:18.647431    5621 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:22:18.647449    5621 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:22:18.647476    5621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:10:fd:dd:e1:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/disk.qcow2
	I0913 17:22:18.649069    5621 main.go:141] libmachine: STDOUT: 
	I0913 17:22:18.649082    5621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:22:18.649104    5621 client.go:171] duration metric: took 214.092959ms to LocalClient.Create
	I0913 17:22:20.651286    5621 start.go:128] duration metric: took 2.239187833s to createHost
	I0913 17:22:20.651366    5621 start.go:83] releasing machines lock for "kindnet-234000", held for 2.239316125s
	W0913 17:22:20.651416    5621 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:22:20.661596    5621 out.go:177] * Deleting "kindnet-234000" in qemu2 ...
	W0913 17:22:20.690815    5621 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:22:20.690846    5621 start.go:729] Will try again in 5 seconds ...
	I0913 17:22:25.692987    5621 start.go:360] acquireMachinesLock for kindnet-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:22:25.693544    5621 start.go:364] duration metric: took 471.791µs to acquireMachinesLock for "kindnet-234000"
	I0913 17:22:25.693681    5621 start.go:93] Provisioning new machine with config: &{Name:kindnet-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:22:25.693905    5621 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:22:25.701535    5621 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:22:25.741375    5621 start.go:159] libmachine.API.Create for "kindnet-234000" (driver="qemu2")
	I0913 17:22:25.741428    5621 client.go:168] LocalClient.Create starting
	I0913 17:22:25.741552    5621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:22:25.741617    5621 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:25.741633    5621 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:25.741701    5621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:22:25.741746    5621 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:25.741756    5621 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:25.742332    5621 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:22:25.909382    5621 main.go:141] libmachine: Creating SSH key...
	I0913 17:22:26.059312    5621 main.go:141] libmachine: Creating Disk image...
	I0913 17:22:26.059325    5621 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:22:26.059546    5621 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/disk.qcow2
	I0913 17:22:26.068824    5621 main.go:141] libmachine: STDOUT: 
	I0913 17:22:26.068842    5621 main.go:141] libmachine: STDERR: 
	I0913 17:22:26.068902    5621 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/disk.qcow2 +20000M
	I0913 17:22:26.077357    5621 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:22:26.077384    5621 main.go:141] libmachine: STDERR: 
	I0913 17:22:26.077399    5621 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/disk.qcow2
	I0913 17:22:26.077404    5621 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:22:26.077413    5621 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:22:26.077439    5621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:fe:1f:f7:98:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kindnet-234000/disk.qcow2
	I0913 17:22:26.079231    5621 main.go:141] libmachine: STDOUT: 
	I0913 17:22:26.079245    5621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:22:26.079266    5621 client.go:171] duration metric: took 337.838584ms to LocalClient.Create
	I0913 17:22:28.081428    5621 start.go:128] duration metric: took 2.38751625s to createHost
	I0913 17:22:28.081480    5621 start.go:83] releasing machines lock for "kindnet-234000", held for 2.387936791s
	W0913 17:22:28.081651    5621 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:22:28.098025    5621 out.go:201] 
	W0913 17:22:28.101114    5621 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:22:28.101126    5621 out.go:270] * 
	W0913 17:22:28.102324    5621 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:22:28.113987    5621 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.83s)
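Every failure in this group reduces to the same root cause visible in the stderr above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor. As a rough pre-flight probe (a standalone sketch, not part of the test suite or of minikube itself), one could dial the socket directly before kicking off a run:

package main

// probe: dials the unix socket that socket_vmnet_client expects. If this
// fails with "connection refused", every qemu2 start in this group will too.
import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}

If the probe fails the way these tests do, the first thing to verify is that the socket_vmnet daemon is actually running on the host and owns that path.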

                                                
                                    
TestNetworkPlugins/group/calico/Start (10.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (10.008022375s)

                                                
                                                
-- stdout --
	* [calico-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-234000" primary control-plane node in "calico-234000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-234000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 17:22:30.347953    5737 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:22:30.348106    5737 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:22:30.348109    5737 out.go:358] Setting ErrFile to fd 2...
	I0913 17:22:30.348112    5737 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:22:30.348254    5737 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:22:30.349333    5737 out.go:352] Setting JSON to false
	I0913 17:22:30.365603    5737 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4914,"bootTime":1726268436,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:22:30.365667    5737 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:22:30.373668    5737 out.go:177] * [calico-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:22:30.381436    5737 notify.go:220] Checking for updates...
	I0913 17:22:30.381442    5737 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:22:30.389427    5737 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:22:30.393513    5737 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:22:30.396461    5737 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:22:30.399476    5737 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:22:30.402426    5737 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:22:30.405791    5737 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:22:30.405866    5737 config.go:182] Loaded profile config "stopped-upgrade-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:22:30.405914    5737 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:22:30.410483    5737 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:22:30.417437    5737 start.go:297] selected driver: qemu2
	I0913 17:22:30.417444    5737 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:22:30.417450    5737 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:22:30.419851    5737 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:22:30.422364    5737 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:22:30.425476    5737 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:22:30.425497    5737 cni.go:84] Creating CNI manager for "calico"
	I0913 17:22:30.425500    5737 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0913 17:22:30.425529    5737 start.go:340] cluster config:
	{Name:calico-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:22:30.429094    5737 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:22:30.434437    5737 out.go:177] * Starting "calico-234000" primary control-plane node in "calico-234000" cluster
	I0913 17:22:30.438414    5737 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:22:30.438426    5737 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:22:30.438434    5737 cache.go:56] Caching tarball of preloaded images
	I0913 17:22:30.438483    5737 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:22:30.438488    5737 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:22:30.438550    5737 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/calico-234000/config.json ...
	I0913 17:22:30.438559    5737 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/calico-234000/config.json: {Name:mk45fe21c2921eaa5bc382f309f459d699a85d6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:22:30.438759    5737 start.go:360] acquireMachinesLock for calico-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:22:30.438793    5737 start.go:364] duration metric: took 29µs to acquireMachinesLock for "calico-234000"
	I0913 17:22:30.438807    5737 start.go:93] Provisioning new machine with config: &{Name:calico-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:22:30.438830    5737 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:22:30.446470    5737 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:22:30.462242    5737 start.go:159] libmachine.API.Create for "calico-234000" (driver="qemu2")
	I0913 17:22:30.462269    5737 client.go:168] LocalClient.Create starting
	I0913 17:22:30.462334    5737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:22:30.462363    5737 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:30.462374    5737 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:30.462409    5737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:22:30.462433    5737 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:30.462439    5737 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:30.462779    5737 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:22:30.623632    5737 main.go:141] libmachine: Creating SSH key...
	I0913 17:22:30.860786    5737 main.go:141] libmachine: Creating Disk image...
	I0913 17:22:30.860799    5737 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:22:30.861002    5737 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/disk.qcow2
	I0913 17:22:30.870866    5737 main.go:141] libmachine: STDOUT: 
	I0913 17:22:30.870894    5737 main.go:141] libmachine: STDERR: 
	I0913 17:22:30.870962    5737 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/disk.qcow2 +20000M
	I0913 17:22:30.879133    5737 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:22:30.879149    5737 main.go:141] libmachine: STDERR: 
	I0913 17:22:30.879171    5737 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/disk.qcow2
	I0913 17:22:30.879176    5737 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:22:30.879189    5737 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:22:30.879234    5737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:fc:de:e3:cd:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/disk.qcow2
	I0913 17:22:30.880950    5737 main.go:141] libmachine: STDOUT: 
	I0913 17:22:30.880965    5737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:22:30.880992    5737 client.go:171] duration metric: took 418.722625ms to LocalClient.Create
	I0913 17:22:32.881421    5737 start.go:128] duration metric: took 2.442615709s to createHost
	I0913 17:22:32.881455    5737 start.go:83] releasing machines lock for "calico-234000", held for 2.442692375s
	W0913 17:22:32.881482    5737 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:22:32.901579    5737 out.go:177] * Deleting "calico-234000" in qemu2 ...
	W0913 17:22:32.925349    5737 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:22:32.925362    5737 start.go:729] Will try again in 5 seconds ...
	I0913 17:22:37.927489    5737 start.go:360] acquireMachinesLock for calico-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:22:37.928126    5737 start.go:364] duration metric: took 525.25µs to acquireMachinesLock for "calico-234000"
	I0913 17:22:37.928252    5737 start.go:93] Provisioning new machine with config: &{Name:calico-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:22:37.928576    5737 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:22:37.933372    5737 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:22:37.983925    5737 start.go:159] libmachine.API.Create for "calico-234000" (driver="qemu2")
	I0913 17:22:37.983984    5737 client.go:168] LocalClient.Create starting
	I0913 17:22:37.984107    5737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:22:37.984165    5737 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:37.984184    5737 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:37.984249    5737 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:22:37.984295    5737 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:37.984308    5737 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:37.984962    5737 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:22:38.151165    5737 main.go:141] libmachine: Creating SSH key...
	I0913 17:22:38.269020    5737 main.go:141] libmachine: Creating Disk image...
	I0913 17:22:38.269031    5737 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:22:38.269219    5737 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/disk.qcow2
	I0913 17:22:38.278880    5737 main.go:141] libmachine: STDOUT: 
	I0913 17:22:38.278899    5737 main.go:141] libmachine: STDERR: 
	I0913 17:22:38.278965    5737 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/disk.qcow2 +20000M
	I0913 17:22:38.287224    5737 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:22:38.287245    5737 main.go:141] libmachine: STDERR: 
	I0913 17:22:38.287257    5737 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/disk.qcow2
	I0913 17:22:38.287265    5737 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:22:38.287274    5737 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:22:38.287304    5737 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:37:15:c2:6a:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/calico-234000/disk.qcow2
	I0913 17:22:38.289086    5737 main.go:141] libmachine: STDOUT: 
	I0913 17:22:38.289105    5737 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:22:38.289117    5737 client.go:171] duration metric: took 305.132542ms to LocalClient.Create
	I0913 17:22:40.291301    5737 start.go:128] duration metric: took 2.36269075s to createHost
	I0913 17:22:40.291341    5737 start.go:83] releasing machines lock for "calico-234000", held for 2.363187709s
	W0913 17:22:40.291490    5737 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:22:40.302342    5737 out.go:201] 
	W0913 17:22:40.306617    5737 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:22:40.306638    5737 out.go:270] * 
	W0913 17:22:40.307411    5737 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:22:40.318294    5737 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.01s)
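The calico run shows the same two-attempt shape as every start in this group: the first createHost fails, minikube deletes the half-created profile, waits a fixed 5 seconds ("Will try again in 5 seconds ..."), retries once, and only then exits with GUEST_PROVISION. A minimal sketch of that control flow follows (hypothetical helper names, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for libmachine's create step; here it always fails
// the way the log does when the vmnet socket is unreachable.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func startWithRetry() error {
	err := createHost()
	if err == nil {
		return nil
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	// minikube deletes the half-created machine here, then pauses 5s.
	time.Sleep(5 * time.Second)
	if err := createHost(); err != nil {
		return fmt.Errorf("GUEST_PROVISION: %w", err)
	}
	return nil
}

func main() {
	if err := startWithRetry(); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}

Because the second attempt hits the same refused socket, the retry only adds the ~5 seconds visible in each test's duration.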

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.915143625s)

                                                
                                                
-- stdout --
	* [custom-flannel-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-234000" primary control-plane node in "custom-flannel-234000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-234000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 17:22:42.763577    5857 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:22:42.763732    5857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:22:42.763735    5857 out.go:358] Setting ErrFile to fd 2...
	I0913 17:22:42.763738    5857 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:22:42.763882    5857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:22:42.764927    5857 out.go:352] Setting JSON to false
	I0913 17:22:42.781441    5857 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4926,"bootTime":1726268436,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:22:42.781520    5857 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:22:42.788738    5857 out.go:177] * [custom-flannel-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:22:42.795651    5857 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:22:42.795706    5857 notify.go:220] Checking for updates...
	I0913 17:22:42.802703    5857 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:22:42.805687    5857 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:22:42.808701    5857 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:22:42.811701    5857 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:22:42.813290    5857 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:22:42.817000    5857 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:22:42.817066    5857 config.go:182] Loaded profile config "stopped-upgrade-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:22:42.817114    5857 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:22:42.821700    5857 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:22:42.826657    5857 start.go:297] selected driver: qemu2
	I0913 17:22:42.826664    5857 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:22:42.826669    5857 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:22:42.828912    5857 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:22:42.831631    5857 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:22:42.834807    5857 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:22:42.834825    5857 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0913 17:22:42.834833    5857 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0913 17:22:42.834865    5857 start.go:340] cluster config:
	{Name:custom-flannel-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:22:42.838751    5857 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:22:42.843696    5857 out.go:177] * Starting "custom-flannel-234000" primary control-plane node in "custom-flannel-234000" cluster
	I0913 17:22:42.850743    5857 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:22:42.850831    5857 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:22:42.850847    5857 cache.go:56] Caching tarball of preloaded images
	I0913 17:22:42.850959    5857 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:22:42.850966    5857 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:22:42.851032    5857 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/custom-flannel-234000/config.json ...
	I0913 17:22:42.851044    5857 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/custom-flannel-234000/config.json: {Name:mkf41abfdbb617cab20ab567cdf399c97f98db90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:22:42.851312    5857 start.go:360] acquireMachinesLock for custom-flannel-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:22:42.851346    5857 start.go:364] duration metric: took 25.209µs to acquireMachinesLock for "custom-flannel-234000"
	I0913 17:22:42.851356    5857 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:22:42.851381    5857 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:22:42.859707    5857 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:22:42.875283    5857 start.go:159] libmachine.API.Create for "custom-flannel-234000" (driver="qemu2")
	I0913 17:22:42.875314    5857 client.go:168] LocalClient.Create starting
	I0913 17:22:42.875380    5857 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:22:42.875435    5857 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:42.875446    5857 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:42.875469    5857 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:22:42.875491    5857 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:42.875500    5857 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:42.875888    5857 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:22:43.037253    5857 main.go:141] libmachine: Creating SSH key...
	I0913 17:22:43.073976    5857 main.go:141] libmachine: Creating Disk image...
	I0913 17:22:43.073982    5857 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:22:43.074145    5857 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/disk.qcow2
	I0913 17:22:43.083366    5857 main.go:141] libmachine: STDOUT: 
	I0913 17:22:43.083388    5857 main.go:141] libmachine: STDERR: 
	I0913 17:22:43.083449    5857 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/disk.qcow2 +20000M
	I0913 17:22:43.091342    5857 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:22:43.091356    5857 main.go:141] libmachine: STDERR: 
	I0913 17:22:43.091398    5857 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/disk.qcow2
	I0913 17:22:43.091404    5857 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:22:43.091415    5857 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:22:43.091443    5857 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:2f:4f:59:21:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/disk.qcow2
	I0913 17:22:43.093102    5857 main.go:141] libmachine: STDOUT: 
	I0913 17:22:43.093116    5857 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:22:43.093137    5857 client.go:171] duration metric: took 217.820291ms to LocalClient.Create
	I0913 17:22:45.095339    5857 start.go:128] duration metric: took 2.2439665s to createHost
	I0913 17:22:45.095443    5857 start.go:83] releasing machines lock for "custom-flannel-234000", held for 2.244120416s
	W0913 17:22:45.095492    5857 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:22:45.110647    5857 out.go:177] * Deleting "custom-flannel-234000" in qemu2 ...
	W0913 17:22:45.136151    5857 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:22:45.136180    5857 start.go:729] Will try again in 5 seconds ...
	I0913 17:22:50.138327    5857 start.go:360] acquireMachinesLock for custom-flannel-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:22:50.138745    5857 start.go:364] duration metric: took 341.209µs to acquireMachinesLock for "custom-flannel-234000"
	I0913 17:22:50.138844    5857 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:22:50.139106    5857 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:22:50.148339    5857 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:22:50.185283    5857 start.go:159] libmachine.API.Create for "custom-flannel-234000" (driver="qemu2")
	I0913 17:22:50.185330    5857 client.go:168] LocalClient.Create starting
	I0913 17:22:50.185462    5857 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:22:50.185522    5857 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:50.185541    5857 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:50.185600    5857 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:22:50.185640    5857 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:50.185654    5857 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:50.186112    5857 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:22:50.352228    5857 main.go:141] libmachine: Creating SSH key...
	I0913 17:22:50.575165    5857 main.go:141] libmachine: Creating Disk image...
	I0913 17:22:50.575175    5857 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:22:50.575392    5857 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/disk.qcow2
	I0913 17:22:50.584936    5857 main.go:141] libmachine: STDOUT: 
	I0913 17:22:50.584955    5857 main.go:141] libmachine: STDERR: 
	I0913 17:22:50.585018    5857 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/disk.qcow2 +20000M
	I0913 17:22:50.592956    5857 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:22:50.592972    5857 main.go:141] libmachine: STDERR: 
	I0913 17:22:50.592984    5857 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/disk.qcow2
	I0913 17:22:50.592988    5857 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:22:50.592999    5857 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:22:50.593033    5857 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:c3:b7:6b:9d:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/custom-flannel-234000/disk.qcow2
	I0913 17:22:50.594697    5857 main.go:141] libmachine: STDOUT: 
	I0913 17:22:50.594714    5857 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:22:50.594725    5857 client.go:171] duration metric: took 409.394917ms to LocalClient.Create
	I0913 17:22:52.596904    5857 start.go:128] duration metric: took 2.457797709s to createHost
	I0913 17:22:52.597014    5857 start.go:83] releasing machines lock for "custom-flannel-234000", held for 2.458283833s
	W0913 17:22:52.597409    5857 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:22:52.615243    5857 out.go:201] 
	W0913 17:22:52.619285    5857 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:22:52.619320    5857 out.go:270] * 
	W0913 17:22:52.621851    5857 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:22:52.636164    5857 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.92s)
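Before each attempt the log shows libmachine preparing the VM disk in two qemu-img steps, a raw-to-qcow2 convert followed by a +20000M resize, and both succeed every time; the failure only appears once socket_vmnet_client enters the picture. A standalone sketch of those two invocations (the paths here are placeholders, not the real machine directories):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command, forwarding its output like the STDOUT/STDERR
// lines captured in the log above.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	raw, qcow2 := "disk.qcow2.raw", "disk.qcow2"
	// Convert the raw boot disk into qcow2 format ("Creating Disk image...").
	if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2); err != nil {
		fmt.Fprintln(os.Stderr, "convert failed:", err)
		os.Exit(1)
	}
	// Grow it by 20000 MB ("Creating 20000 MB hard disk image...").
	if err := run("qemu-img", "resize", qcow2, "+20000M"); err != nil {
		fmt.Fprintln(os.Stderr, "resize failed:", err)
		os.Exit(1)
	}
}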

                                                
                                    
TestNetworkPlugins/group/false/Start (9.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.926401166s)

                                                
                                                
-- stdout --
	* [false-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-234000" primary control-plane node in "false-234000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-234000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:22:55.050555    5980 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:22:55.050702    5980 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:22:55.050705    5980 out.go:358] Setting ErrFile to fd 2...
	I0913 17:22:55.050707    5980 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:22:55.050850    5980 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:22:55.051917    5980 out.go:352] Setting JSON to false
	I0913 17:22:55.068287    5980 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4939,"bootTime":1726268436,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:22:55.068359    5980 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:22:55.074595    5980 out.go:177] * [false-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:22:55.082456    5980 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:22:55.082510    5980 notify.go:220] Checking for updates...
	I0913 17:22:55.089352    5980 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:22:55.092378    5980 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:22:55.095436    5980 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:22:55.098322    5980 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:22:55.101395    5980 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:22:55.104804    5980 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:22:55.104871    5980 config.go:182] Loaded profile config "stopped-upgrade-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:22:55.104922    5980 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:22:55.109315    5980 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:22:55.116492    5980 start.go:297] selected driver: qemu2
	I0913 17:22:55.116501    5980 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:22:55.116510    5980 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:22:55.118867    5980 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:22:55.123419    5980 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:22:55.126888    5980 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:22:55.126907    5980 cni.go:84] Creating CNI manager for "false"
	I0913 17:22:55.126941    5980 start.go:340] cluster config:
	{Name:false-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:22:55.130540    5980 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:22:55.136407    5980 out.go:177] * Starting "false-234000" primary control-plane node in "false-234000" cluster
	I0913 17:22:55.140384    5980 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:22:55.140399    5980 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:22:55.140410    5980 cache.go:56] Caching tarball of preloaded images
	I0913 17:22:55.140473    5980 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:22:55.140478    5980 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:22:55.140537    5980 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/false-234000/config.json ...
	I0913 17:22:55.140548    5980 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/false-234000/config.json: {Name:mk0f73d8c692b6cf14ea856f778ce2273f205b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:22:55.140954    5980 start.go:360] acquireMachinesLock for false-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:22:55.140983    5980 start.go:364] duration metric: took 24.5µs to acquireMachinesLock for "false-234000"
	I0913 17:22:55.140993    5980 start.go:93] Provisioning new machine with config: &{Name:false-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:22:55.141019    5980 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:22:55.145338    5980 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:22:55.160999    5980 start.go:159] libmachine.API.Create for "false-234000" (driver="qemu2")
	I0913 17:22:55.161039    5980 client.go:168] LocalClient.Create starting
	I0913 17:22:55.161124    5980 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:22:55.161161    5980 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:55.161172    5980 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:55.161216    5980 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:22:55.161238    5980 main.go:141] libmachine: Decoding PEM data...
	I0913 17:22:55.161247    5980 main.go:141] libmachine: Parsing certificate...
	I0913 17:22:55.161604    5980 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:22:55.322839    5980 main.go:141] libmachine: Creating SSH key...
	I0913 17:22:55.417644    5980 main.go:141] libmachine: Creating Disk image...
	I0913 17:22:55.417653    5980 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:22:55.417828    5980 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/disk.qcow2
	I0913 17:22:55.427387    5980 main.go:141] libmachine: STDOUT: 
	I0913 17:22:55.427410    5980 main.go:141] libmachine: STDERR: 
	I0913 17:22:55.427468    5980 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/disk.qcow2 +20000M
	I0913 17:22:55.435295    5980 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:22:55.435310    5980 main.go:141] libmachine: STDERR: 
	I0913 17:22:55.435328    5980 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/disk.qcow2
	I0913 17:22:55.435336    5980 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:22:55.435345    5980 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:22:55.435379    5980 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:d1:d9:c9:c2:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/disk.qcow2
	I0913 17:22:55.436966    5980 main.go:141] libmachine: STDOUT: 
	I0913 17:22:55.436988    5980 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:22:55.437011    5980 client.go:171] duration metric: took 275.966625ms to LocalClient.Create
	I0913 17:22:57.439189    5980 start.go:128] duration metric: took 2.298174792s to createHost
	I0913 17:22:57.439301    5980 start.go:83] releasing machines lock for "false-234000", held for 2.298341625s
	W0913 17:22:57.439357    5980 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:22:57.456900    5980 out.go:177] * Deleting "false-234000" in qemu2 ...
	W0913 17:22:57.488298    5980 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:22:57.488337    5980 start.go:729] Will try again in 5 seconds ...
	I0913 17:23:02.490562    5980 start.go:360] acquireMachinesLock for false-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:23:02.491117    5980 start.go:364] duration metric: took 448µs to acquireMachinesLock for "false-234000"
	I0913 17:23:02.491183    5980 start.go:93] Provisioning new machine with config: &{Name:false-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:23:02.491523    5980 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:23:02.500168    5980 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:23:02.551493    5980 start.go:159] libmachine.API.Create for "false-234000" (driver="qemu2")
	I0913 17:23:02.551547    5980 client.go:168] LocalClient.Create starting
	I0913 17:23:02.551661    5980 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:23:02.551724    5980 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:02.551737    5980 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:02.551809    5980 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:23:02.551854    5980 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:02.551868    5980 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:02.552428    5980 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:23:02.720425    5980 main.go:141] libmachine: Creating SSH key...
	I0913 17:23:02.878948    5980 main.go:141] libmachine: Creating Disk image...
	I0913 17:23:02.878956    5980 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:23:02.879143    5980 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/disk.qcow2
	I0913 17:23:02.889044    5980 main.go:141] libmachine: STDOUT: 
	I0913 17:23:02.889067    5980 main.go:141] libmachine: STDERR: 
	I0913 17:23:02.889126    5980 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/disk.qcow2 +20000M
	I0913 17:23:02.897190    5980 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:23:02.897206    5980 main.go:141] libmachine: STDERR: 
	I0913 17:23:02.897223    5980 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/disk.qcow2
	I0913 17:23:02.897227    5980 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:23:02.897237    5980 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:23:02.897283    5980 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:79:47:08:c3:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/false-234000/disk.qcow2
	I0913 17:23:02.898959    5980 main.go:141] libmachine: STDOUT: 
	I0913 17:23:02.898973    5980 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:23:02.898990    5980 client.go:171] duration metric: took 347.440417ms to LocalClient.Create
	I0913 17:23:04.901168    5980 start.go:128] duration metric: took 2.409635833s to createHost
	I0913 17:23:04.901280    5980 start.go:83] releasing machines lock for "false-234000", held for 2.410172042s
	W0913 17:23:04.901698    5980 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:23:04.911366    5980 out.go:201] 
	W0913 17:23:04.920534    5980 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:23:04.920590    5980 out.go:270] * 
	* 
	W0913 17:23:04.923798    5980 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:23:04.934386    5980 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.93s)
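
Note how the VM is launched in the stderr above: minikube does not exec qemu-system-aarch64 directly, it wraps it in /opt/socket_vmnet/bin/socket_vmnet_client, which first connects to /var/run/socket_vmnet and then hands the connected descriptor to QEMU as fd 3 (hence "-netdev socket,id=net0,fd=3"). The refused connect therefore aborts the whole command before QEMU ever starts, which is why the qemu STDOUT above is empty. A Go sketch of that descriptor-passing pattern (an illustration of the mechanism, not minikube's or socket_vmnet_client's actual code):

package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		log.Fatalf("connect: %v", err) // the step that fails in the runs above
	}
	f, err := conn.(*net.UnixConn).File() // duplicate the connected descriptor
	if err != nil {
		log.Fatal(err)
	}
	// Children inherit fds 0-2; the first ExtraFiles entry becomes fd 3,
	// matching QEMU's "-netdev socket,id=net0,fd=3" in the log above.
	cmd := exec.Command("qemu-system-aarch64") // real flags elided
	cmd.ExtraFiles = []*os.File{f}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
}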

TestNetworkPlugins/group/enable-default-cni/Start (9.97s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.966135667s)

-- stdout --
	* [enable-default-cni-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-234000" primary control-plane node in "enable-default-cni-234000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-234000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:23:07.170492    6089 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:23:07.170627    6089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:23:07.170635    6089 out.go:358] Setting ErrFile to fd 2...
	I0913 17:23:07.170637    6089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:23:07.170768    6089 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:23:07.171847    6089 out.go:352] Setting JSON to false
	I0913 17:23:07.188672    6089 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4951,"bootTime":1726268436,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:23:07.188744    6089 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:23:07.196019    6089 out.go:177] * [enable-default-cni-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:23:07.202819    6089 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:23:07.202874    6089 notify.go:220] Checking for updates...
	I0913 17:23:07.209839    6089 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:23:07.212845    6089 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:23:07.215859    6089 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:23:07.218855    6089 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:23:07.221809    6089 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:23:07.225179    6089 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:23:07.225241    6089 config.go:182] Loaded profile config "stopped-upgrade-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:23:07.225288    6089 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:23:07.228721    6089 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:23:07.235817    6089 start.go:297] selected driver: qemu2
	I0913 17:23:07.235823    6089 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:23:07.235828    6089 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:23:07.238017    6089 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:23:07.239396    6089 out.go:177] * Automatically selected the socket_vmnet network
	E0913 17:23:07.241893    6089 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0913 17:23:07.241905    6089 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:23:07.241917    6089 cni.go:84] Creating CNI manager for "bridge"
	I0913 17:23:07.241919    6089 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 17:23:07.241957    6089 start.go:340] cluster config:
	{Name:enable-default-cni-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:23:07.245486    6089 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:23:07.252809    6089 out.go:177] * Starting "enable-default-cni-234000" primary control-plane node in "enable-default-cni-234000" cluster
	I0913 17:23:07.256824    6089 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:23:07.256837    6089 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:23:07.256855    6089 cache.go:56] Caching tarball of preloaded images
	I0913 17:23:07.256929    6089 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:23:07.256934    6089 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:23:07.256987    6089 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/enable-default-cni-234000/config.json ...
	I0913 17:23:07.256997    6089 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/enable-default-cni-234000/config.json: {Name:mk08af3b0a4478f27a3700eca2da080f8dbbf2c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:23:07.257223    6089 start.go:360] acquireMachinesLock for enable-default-cni-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:23:07.257259    6089 start.go:364] duration metric: took 30µs to acquireMachinesLock for "enable-default-cni-234000"
	I0913 17:23:07.257270    6089 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:23:07.257303    6089 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:23:07.265837    6089 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:23:07.281251    6089 start.go:159] libmachine.API.Create for "enable-default-cni-234000" (driver="qemu2")
	I0913 17:23:07.281274    6089 client.go:168] LocalClient.Create starting
	I0913 17:23:07.281342    6089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:23:07.281374    6089 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:07.281384    6089 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:07.281423    6089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:23:07.281446    6089 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:07.281455    6089 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:07.281870    6089 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:23:07.441906    6089 main.go:141] libmachine: Creating SSH key...
	I0913 17:23:07.479780    6089 main.go:141] libmachine: Creating Disk image...
	I0913 17:23:07.479786    6089 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:23:07.479937    6089 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/disk.qcow2
	I0913 17:23:07.489247    6089 main.go:141] libmachine: STDOUT: 
	I0913 17:23:07.489279    6089 main.go:141] libmachine: STDERR: 
	I0913 17:23:07.489333    6089 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/disk.qcow2 +20000M
	I0913 17:23:07.497454    6089 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:23:07.497468    6089 main.go:141] libmachine: STDERR: 
	I0913 17:23:07.497492    6089 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/disk.qcow2
	I0913 17:23:07.497497    6089 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:23:07.497510    6089 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:23:07.497535    6089 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:e3:1c:10:4a:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/disk.qcow2
	I0913 17:23:07.499219    6089 main.go:141] libmachine: STDOUT: 
	I0913 17:23:07.499234    6089 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:23:07.499258    6089 client.go:171] duration metric: took 217.981458ms to LocalClient.Create
	I0913 17:23:09.501420    6089 start.go:128] duration metric: took 2.24412925s to createHost
	I0913 17:23:09.501492    6089 start.go:83] releasing machines lock for "enable-default-cni-234000", held for 2.244257292s
	W0913 17:23:09.501534    6089 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:23:09.512542    6089 out.go:177] * Deleting "enable-default-cni-234000" in qemu2 ...
	W0913 17:23:09.542227    6089 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:23:09.542254    6089 start.go:729] Will try again in 5 seconds ...
	I0913 17:23:14.544471    6089 start.go:360] acquireMachinesLock for enable-default-cni-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:23:14.544880    6089 start.go:364] duration metric: took 334.916µs to acquireMachinesLock for "enable-default-cni-234000"
	I0913 17:23:14.544976    6089 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:23:14.545179    6089 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:23:14.554493    6089 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:23:14.595521    6089 start.go:159] libmachine.API.Create for "enable-default-cni-234000" (driver="qemu2")
	I0913 17:23:14.595565    6089 client.go:168] LocalClient.Create starting
	I0913 17:23:14.595674    6089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:23:14.595744    6089 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:14.595759    6089 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:14.595819    6089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:23:14.595858    6089 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:14.595867    6089 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:14.596355    6089 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:23:14.763662    6089 main.go:141] libmachine: Creating SSH key...
	I0913 17:23:15.046420    6089 main.go:141] libmachine: Creating Disk image...
	I0913 17:23:15.046432    6089 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:23:15.046623    6089 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/disk.qcow2
	I0913 17:23:15.056271    6089 main.go:141] libmachine: STDOUT: 
	I0913 17:23:15.056294    6089 main.go:141] libmachine: STDERR: 
	I0913 17:23:15.056359    6089 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/disk.qcow2 +20000M
	I0913 17:23:15.064467    6089 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:23:15.064485    6089 main.go:141] libmachine: STDERR: 
	I0913 17:23:15.064497    6089 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/disk.qcow2
	I0913 17:23:15.064504    6089 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:23:15.064524    6089 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:23:15.064554    6089 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:5d:f8:db:09:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/enable-default-cni-234000/disk.qcow2
	I0913 17:23:15.066463    6089 main.go:141] libmachine: STDOUT: 
	I0913 17:23:15.066480    6089 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:23:15.066493    6089 client.go:171] duration metric: took 470.930958ms to LocalClient.Create
	I0913 17:23:17.068729    6089 start.go:128] duration metric: took 2.523447s to createHost
	I0913 17:23:17.068802    6089 start.go:83] releasing machines lock for "enable-default-cni-234000", held for 2.523941458s
	W0913 17:23:17.069091    6089 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:23:17.078632    6089 out.go:201] 
	W0913 17:23:17.084686    6089 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:23:17.084718    6089 out.go:270] * 
	* 
	W0913 17:23:17.085871    6089 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:23:17.103634    6089 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.97s)
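
One run-specific detail: the E-level line from start_flags.go above shows that --enable-default-cni is deprecated and is rewritten to the bridge CNI, which is why the logged cluster config ends up with NetworkPlugin:cni and CNI:bridge. A Go paraphrase of that translation (hypothetical helper name; the real flag handling lives in minikube's start_flags.go):

package main

import "fmt"

// resolveCNI mirrors the translation logged above:
// "Found deprecated --enable-default-cni flag, setting --cni=bridge".
func resolveCNI(enableDefaultCNI bool, cni string) string {
	if enableDefaultCNI && cni == "" {
		return "bridge"
	}
	return cni
}

func main() {
	// As in this run: --enable-default-cni=true with no explicit --cni.
	fmt.Println(resolveCNI(true, "")) // bridge
}

The failure itself is unrelated to the CNI choice; it is the same socket_vmnet connection refusal as the other starts in this group.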

TestNetworkPlugins/group/flannel/Start (9.75s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.748939291s)

-- stdout --
	* [flannel-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-234000" primary control-plane node in "flannel-234000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-234000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:23:19.295407    6200 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:23:19.295536    6200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:23:19.295539    6200 out.go:358] Setting ErrFile to fd 2...
	I0913 17:23:19.295542    6200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:23:19.295671    6200 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:23:19.296757    6200 out.go:352] Setting JSON to false
	I0913 17:23:19.313128    6200 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4963,"bootTime":1726268436,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:23:19.313205    6200 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:23:19.317981    6200 out.go:177] * [flannel-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:23:19.326944    6200 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:23:19.326993    6200 notify.go:220] Checking for updates...
	I0913 17:23:19.333893    6200 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:23:19.336895    6200 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:23:19.339917    6200 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:23:19.342852    6200 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:23:19.345916    6200 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:23:19.347712    6200 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:23:19.347781    6200 config.go:182] Loaded profile config "stopped-upgrade-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:23:19.347833    6200 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:23:19.351842    6200 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:23:19.358681    6200 start.go:297] selected driver: qemu2
	I0913 17:23:19.358686    6200 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:23:19.358692    6200 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:23:19.360924    6200 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:23:19.363879    6200 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:23:19.366969    6200 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:23:19.366987    6200 cni.go:84] Creating CNI manager for "flannel"
	I0913 17:23:19.366990    6200 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0913 17:23:19.367033    6200 start.go:340] cluster config:
	{Name:flannel-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:23:19.370649    6200 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:23:19.377908    6200 out.go:177] * Starting "flannel-234000" primary control-plane node in "flannel-234000" cluster
	I0913 17:23:19.381932    6200 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:23:19.381949    6200 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:23:19.381957    6200 cache.go:56] Caching tarball of preloaded images
	I0913 17:23:19.382028    6200 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:23:19.382035    6200 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:23:19.382090    6200 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/flannel-234000/config.json ...
	I0913 17:23:19.382102    6200 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/flannel-234000/config.json: {Name:mkdc357558da61a1fa6b8dcd81aa4da06171e103 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:23:19.382346    6200 start.go:360] acquireMachinesLock for flannel-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:23:19.382381    6200 start.go:364] duration metric: took 29.292µs to acquireMachinesLock for "flannel-234000"
	I0913 17:23:19.382392    6200 start.go:93] Provisioning new machine with config: &{Name:flannel-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:23:19.382424    6200 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:23:19.390900    6200 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:23:19.407778    6200 start.go:159] libmachine.API.Create for "flannel-234000" (driver="qemu2")
	I0913 17:23:19.407813    6200 client.go:168] LocalClient.Create starting
	I0913 17:23:19.407881    6200 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:23:19.407913    6200 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:19.407921    6200 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:19.407961    6200 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:23:19.407983    6200 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:19.407989    6200 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:19.408326    6200 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:23:19.593095    6200 main.go:141] libmachine: Creating SSH key...
	I0913 17:23:19.638669    6200 main.go:141] libmachine: Creating Disk image...
	I0913 17:23:19.638675    6200 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:23:19.638850    6200 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/disk.qcow2
	I0913 17:23:19.648206    6200 main.go:141] libmachine: STDOUT: 
	I0913 17:23:19.648226    6200 main.go:141] libmachine: STDERR: 
	I0913 17:23:19.648289    6200 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/disk.qcow2 +20000M
	I0913 17:23:19.656114    6200 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:23:19.656143    6200 main.go:141] libmachine: STDERR: 
	I0913 17:23:19.656157    6200 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/disk.qcow2
	I0913 17:23:19.656162    6200 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:23:19.656172    6200 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:23:19.656200    6200 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:9a:83:76:48:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/disk.qcow2
	I0913 17:23:19.657824    6200 main.go:141] libmachine: STDOUT: 
	I0913 17:23:19.657847    6200 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:23:19.657869    6200 client.go:171] duration metric: took 250.054708ms to LocalClient.Create
	I0913 17:23:21.660183    6200 start.go:128] duration metric: took 2.277755041s to createHost
	I0913 17:23:21.660273    6200 start.go:83] releasing machines lock for "flannel-234000", held for 2.277916167s
	W0913 17:23:21.660323    6200 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:23:21.667828    6200 out.go:177] * Deleting "flannel-234000" in qemu2 ...
	W0913 17:23:21.697117    6200 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:23:21.697145    6200 start.go:729] Will try again in 5 seconds ...
	I0913 17:23:26.699367    6200 start.go:360] acquireMachinesLock for flannel-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:23:26.699909    6200 start.go:364] duration metric: took 400.541µs to acquireMachinesLock for "flannel-234000"
	I0913 17:23:26.700035    6200 start.go:93] Provisioning new machine with config: &{Name:flannel-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:23:26.700315    6200 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:23:26.710685    6200 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:23:26.744748    6200 start.go:159] libmachine.API.Create for "flannel-234000" (driver="qemu2")
	I0913 17:23:26.744812    6200 client.go:168] LocalClient.Create starting
	I0913 17:23:26.744918    6200 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:23:26.744976    6200 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:26.744988    6200 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:26.745040    6200 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:23:26.745075    6200 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:26.745085    6200 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:26.745512    6200 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:23:26.910108    6200 main.go:141] libmachine: Creating SSH key...
	I0913 17:23:26.944452    6200 main.go:141] libmachine: Creating Disk image...
	I0913 17:23:26.944458    6200 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:23:26.944606    6200 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/disk.qcow2
	I0913 17:23:26.953806    6200 main.go:141] libmachine: STDOUT: 
	I0913 17:23:26.953823    6200 main.go:141] libmachine: STDERR: 
	I0913 17:23:26.953885    6200 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/disk.qcow2 +20000M
	I0913 17:23:26.961748    6200 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:23:26.961765    6200 main.go:141] libmachine: STDERR: 
	I0913 17:23:26.961776    6200 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/disk.qcow2
	I0913 17:23:26.961782    6200 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:23:26.961794    6200 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:23:26.961821    6200 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:cd:cf:72:92:d1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/flannel-234000/disk.qcow2
	I0913 17:23:26.963527    6200 main.go:141] libmachine: STDOUT: 
	I0913 17:23:26.963545    6200 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:23:26.963558    6200 client.go:171] duration metric: took 218.744833ms to LocalClient.Create
	I0913 17:23:28.965731    6200 start.go:128] duration metric: took 2.26541775s to createHost
	I0913 17:23:28.965829    6200 start.go:83] releasing machines lock for "flannel-234000", held for 2.265922792s
	W0913 17:23:28.966195    6200 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:23:28.983059    6200 out.go:201] 
	W0913 17:23:28.986002    6200 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:23:28.986029    6200 out.go:270] * 
	* 
	W0913 17:23:28.988740    6200 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:23:29.001947    6200 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.75s)
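
All three CNI variants in this section (flannel above, bridge and kubenet below) fail at the identical step: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so the QEMU VM is never launched and each start exits with status 80 after one delete-and-retry cycle. A possible triage sketch for the agent host follows; the restart command is an assumption (a Homebrew-managed root service), since the daemon's install method is not visible in the log:

	# Does the socket exist, and is any socket_vmnet daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Assumption: socket_vmnet is managed as a root launchd service via Homebrew
	sudo brew services restart socket_vmnet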

TestNetworkPlugins/group/bridge/Start (9.78s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.775235709s)

-- stdout --
	* [bridge-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-234000" primary control-plane node in "bridge-234000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-234000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:23:31.449536    6317 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:23:31.449678    6317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:23:31.449681    6317 out.go:358] Setting ErrFile to fd 2...
	I0913 17:23:31.449684    6317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:23:31.449814    6317 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:23:31.450901    6317 out.go:352] Setting JSON to false
	I0913 17:23:31.467286    6317 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4975,"bootTime":1726268436,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:23:31.467363    6317 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:23:31.474113    6317 out.go:177] * [bridge-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:23:31.482883    6317 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:23:31.482927    6317 notify.go:220] Checking for updates...
	I0913 17:23:31.489859    6317 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:23:31.492880    6317 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:23:31.495934    6317 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:23:31.498823    6317 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:23:31.501832    6317 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:23:31.505213    6317 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:23:31.505278    6317 config.go:182] Loaded profile config "stopped-upgrade-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:23:31.505325    6317 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:23:31.509846    6317 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:23:31.516879    6317 start.go:297] selected driver: qemu2
	I0913 17:23:31.516886    6317 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:23:31.516893    6317 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:23:31.519207    6317 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:23:31.521838    6317 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:23:31.524914    6317 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:23:31.524928    6317 cni.go:84] Creating CNI manager for "bridge"
	I0913 17:23:31.524932    6317 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 17:23:31.524971    6317 start.go:340] cluster config:
	{Name:bridge-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:23:31.528470    6317 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:23:31.533836    6317 out.go:177] * Starting "bridge-234000" primary control-plane node in "bridge-234000" cluster
	I0913 17:23:31.537900    6317 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:23:31.537927    6317 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:23:31.537938    6317 cache.go:56] Caching tarball of preloaded images
	I0913 17:23:31.538080    6317 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:23:31.538085    6317 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:23:31.538136    6317 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/bridge-234000/config.json ...
	I0913 17:23:31.538146    6317 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/bridge-234000/config.json: {Name:mkf525045f8cfa2f1a0f9f1f44cce04e337e1900 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:23:31.538348    6317 start.go:360] acquireMachinesLock for bridge-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:23:31.538378    6317 start.go:364] duration metric: took 25.084µs to acquireMachinesLock for "bridge-234000"
	I0913 17:23:31.538388    6317 start.go:93] Provisioning new machine with config: &{Name:bridge-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:23:31.538409    6317 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:23:31.546842    6317 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:23:31.562059    6317 start.go:159] libmachine.API.Create for "bridge-234000" (driver="qemu2")
	I0913 17:23:31.562084    6317 client.go:168] LocalClient.Create starting
	I0913 17:23:31.562149    6317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:23:31.562180    6317 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:31.562189    6317 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:31.562230    6317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:23:31.562252    6317 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:31.562259    6317 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:31.562596    6317 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:23:31.721950    6317 main.go:141] libmachine: Creating SSH key...
	I0913 17:23:31.761147    6317 main.go:141] libmachine: Creating Disk image...
	I0913 17:23:31.761152    6317 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:23:31.761306    6317 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/disk.qcow2
	I0913 17:23:31.770856    6317 main.go:141] libmachine: STDOUT: 
	I0913 17:23:31.770880    6317 main.go:141] libmachine: STDERR: 
	I0913 17:23:31.770941    6317 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/disk.qcow2 +20000M
	I0913 17:23:31.779215    6317 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:23:31.779234    6317 main.go:141] libmachine: STDERR: 
	I0913 17:23:31.779255    6317 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/disk.qcow2
	I0913 17:23:31.779261    6317 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:23:31.779276    6317 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:23:31.779309    6317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:a8:18:6b:03:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/disk.qcow2
	I0913 17:23:31.781097    6317 main.go:141] libmachine: STDOUT: 
	I0913 17:23:31.781112    6317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:23:31.781134    6317 client.go:171] duration metric: took 219.04725ms to LocalClient.Create
	I0913 17:23:33.783212    6317 start.go:128] duration metric: took 2.24482275s to createHost
	I0913 17:23:33.783272    6317 start.go:83] releasing machines lock for "bridge-234000", held for 2.244921791s
	W0913 17:23:33.783298    6317 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:23:33.803435    6317 out.go:177] * Deleting "bridge-234000" in qemu2 ...
	W0913 17:23:33.830019    6317 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:23:33.830038    6317 start.go:729] Will try again in 5 seconds ...
	I0913 17:23:38.832222    6317 start.go:360] acquireMachinesLock for bridge-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:23:38.832597    6317 start.go:364] duration metric: took 299.041µs to acquireMachinesLock for "bridge-234000"
	I0913 17:23:38.832677    6317 start.go:93] Provisioning new machine with config: &{Name:bridge-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:23:38.832818    6317 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:23:38.842303    6317 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:23:38.875889    6317 start.go:159] libmachine.API.Create for "bridge-234000" (driver="qemu2")
	I0913 17:23:38.875932    6317 client.go:168] LocalClient.Create starting
	I0913 17:23:38.876043    6317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:23:38.876097    6317 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:38.876110    6317 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:38.876158    6317 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:23:38.876197    6317 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:38.876207    6317 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:38.876638    6317 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:23:39.041979    6317 main.go:141] libmachine: Creating SSH key...
	I0913 17:23:39.134981    6317 main.go:141] libmachine: Creating Disk image...
	I0913 17:23:39.134990    6317 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:23:39.135172    6317 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/disk.qcow2
	I0913 17:23:39.144835    6317 main.go:141] libmachine: STDOUT: 
	I0913 17:23:39.144862    6317 main.go:141] libmachine: STDERR: 
	I0913 17:23:39.144920    6317 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/disk.qcow2 +20000M
	I0913 17:23:39.153027    6317 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:23:39.153044    6317 main.go:141] libmachine: STDERR: 
	I0913 17:23:39.153063    6317 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/disk.qcow2
	I0913 17:23:39.153069    6317 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:23:39.153076    6317 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:23:39.153121    6317 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:17:1c:15:82:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/bridge-234000/disk.qcow2
	I0913 17:23:39.154830    6317 main.go:141] libmachine: STDOUT: 
	I0913 17:23:39.154844    6317 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:23:39.154856    6317 client.go:171] duration metric: took 278.922875ms to LocalClient.Create
	I0913 17:23:41.157022    6317 start.go:128] duration metric: took 2.324208041s to createHost
	I0913 17:23:41.157118    6317 start.go:83] releasing machines lock for "bridge-234000", held for 2.324540625s
	W0913 17:23:41.157496    6317 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:23:41.167077    6317 out.go:201] 
	W0913 17:23:41.172201    6317 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:23:41.172222    6317 out.go:270] * 
	* 
	W0913 17:23:41.174094    6317 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:23:41.184101    6317 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.78s)
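
The bridge run reproduces the flannel failure byte-for-byte apart from profile names and MAC addresses, which points at the host environment rather than the CNI under test. The wrapper can also be exercised without QEMU; a sketch assuming socket_vmnet_client execs whatever command follows the socket path (the log shows it wrapping qemu-system-aarch64 in exactly that position):

	# Connects to the socket, then execs `true` with the vmnet fd attached;
	# on this host it should fail immediately with the same "Connection refused".
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true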

TestNetworkPlugins/group/kubenet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-234000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.847384041s)

-- stdout --
	* [kubenet-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-234000" primary control-plane node in "kubenet-234000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-234000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:23:43.368781    6427 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:23:43.368928    6427 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:23:43.368932    6427 out.go:358] Setting ErrFile to fd 2...
	I0913 17:23:43.368934    6427 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:23:43.369065    6427 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:23:43.370210    6427 out.go:352] Setting JSON to false
	I0913 17:23:43.386567    6427 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4987,"bootTime":1726268436,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:23:43.386648    6427 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:23:43.392787    6427 out.go:177] * [kubenet-234000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:23:43.400688    6427 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:23:43.400740    6427 notify.go:220] Checking for updates...
	I0913 17:23:43.408569    6427 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:23:43.411771    6427 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:23:43.414569    6427 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:23:43.417593    6427 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:23:43.420586    6427 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:23:43.423973    6427 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:23:43.424039    6427 config.go:182] Loaded profile config "stopped-upgrade-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:23:43.424085    6427 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:23:43.428563    6427 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:23:43.434571    6427 start.go:297] selected driver: qemu2
	I0913 17:23:43.434577    6427 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:23:43.434583    6427 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:23:43.436861    6427 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:23:43.439591    6427 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:23:43.442673    6427 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:23:43.442688    6427 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0913 17:23:43.442719    6427 start.go:340] cluster config:
	{Name:kubenet-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:23:43.446042    6427 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:23:43.453600    6427 out.go:177] * Starting "kubenet-234000" primary control-plane node in "kubenet-234000" cluster
	I0913 17:23:43.457642    6427 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:23:43.457654    6427 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:23:43.457663    6427 cache.go:56] Caching tarball of preloaded images
	I0913 17:23:43.457714    6427 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:23:43.457719    6427 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:23:43.457765    6427 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/kubenet-234000/config.json ...
	I0913 17:23:43.457775    6427 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/kubenet-234000/config.json: {Name:mkc09b4f4d6772529e868d77503eb165b26862c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:23:43.457995    6427 start.go:360] acquireMachinesLock for kubenet-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:23:43.458027    6427 start.go:364] duration metric: took 27.208µs to acquireMachinesLock for "kubenet-234000"
	I0913 17:23:43.458037    6427 start.go:93] Provisioning new machine with config: &{Name:kubenet-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:23:43.458059    6427 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:23:43.465605    6427 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:23:43.481726    6427 start.go:159] libmachine.API.Create for "kubenet-234000" (driver="qemu2")
	I0913 17:23:43.481781    6427 client.go:168] LocalClient.Create starting
	I0913 17:23:43.481849    6427 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:23:43.481882    6427 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:43.481891    6427 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:43.481932    6427 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:23:43.481958    6427 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:43.481966    6427 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:43.482328    6427 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:23:43.640652    6427 main.go:141] libmachine: Creating SSH key...
	I0913 17:23:43.753935    6427 main.go:141] libmachine: Creating Disk image...
	I0913 17:23:43.753942    6427 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:23:43.754123    6427 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/disk.qcow2
	I0913 17:23:43.763603    6427 main.go:141] libmachine: STDOUT: 
	I0913 17:23:43.763618    6427 main.go:141] libmachine: STDERR: 
	I0913 17:23:43.763675    6427 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/disk.qcow2 +20000M
	I0913 17:23:43.771750    6427 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:23:43.771776    6427 main.go:141] libmachine: STDERR: 
	I0913 17:23:43.771797    6427 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/disk.qcow2
	I0913 17:23:43.771802    6427 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:23:43.771812    6427 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:23:43.771846    6427 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:ff:b4:5d:63:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/disk.qcow2
	I0913 17:23:43.773611    6427 main.go:141] libmachine: STDOUT: 
	I0913 17:23:43.773625    6427 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:23:43.773647    6427 client.go:171] duration metric: took 291.863416ms to LocalClient.Create
	I0913 17:23:45.775706    6427 start.go:128] duration metric: took 2.317670125s to createHost
	I0913 17:23:45.775728    6427 start.go:83] releasing machines lock for "kubenet-234000", held for 2.317730958s
	W0913 17:23:45.775776    6427 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:23:45.784571    6427 out.go:177] * Deleting "kubenet-234000" in qemu2 ...
	W0913 17:23:45.798256    6427 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:23:45.798269    6427 start.go:729] Will try again in 5 seconds ...
	I0913 17:23:50.743292    6427 start.go:360] acquireMachinesLock for kubenet-234000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:23:50.743834    6427 start.go:364] duration metric: took 445.708µs to acquireMachinesLock for "kubenet-234000"
	I0913 17:23:50.743969    6427 start.go:93] Provisioning new machine with config: &{Name:kubenet-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:23:50.744263    6427 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:23:50.755203    6427 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0913 17:23:50.806079    6427 start.go:159] libmachine.API.Create for "kubenet-234000" (driver="qemu2")
	I0913 17:23:50.806134    6427 client.go:168] LocalClient.Create starting
	I0913 17:23:50.806243    6427 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:23:50.806314    6427 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:50.806331    6427 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:50.806396    6427 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:23:50.806441    6427 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:50.806453    6427 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:50.806999    6427 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:23:50.977555    6427 main.go:141] libmachine: Creating SSH key...
	I0913 17:23:51.069032    6427 main.go:141] libmachine: Creating Disk image...
	I0913 17:23:51.069039    6427 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:23:51.069220    6427 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/disk.qcow2
	I0913 17:23:51.078818    6427 main.go:141] libmachine: STDOUT: 
	I0913 17:23:51.078845    6427 main.go:141] libmachine: STDERR: 
	I0913 17:23:51.078911    6427 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/disk.qcow2 +20000M
	I0913 17:23:51.087118    6427 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:23:51.087140    6427 main.go:141] libmachine: STDERR: 
	I0913 17:23:51.087161    6427 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/disk.qcow2
	I0913 17:23:51.087168    6427 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:23:51.087177    6427 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:23:51.087199    6427 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:e2:17:b5:38:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/kubenet-234000/disk.qcow2
	I0913 17:23:51.088947    6427 main.go:141] libmachine: STDOUT: 
	I0913 17:23:51.088968    6427 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:23:51.088981    6427 client.go:171] duration metric: took 282.848125ms to LocalClient.Create
	I0913 17:23:53.091104    6427 start.go:128] duration metric: took 2.346842167s to createHost
	I0913 17:23:53.091175    6427 start.go:83] releasing machines lock for "kubenet-234000", held for 2.34739175s
	W0913 17:23:53.091513    6427 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:23:53.104074    6427 out.go:201] 
	W0913 17:23:53.107077    6427 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:23:53.107101    6427 out.go:270] * 
	* 
	W0913 17:23:53.108864    6427 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:23:53.122128    6427 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.85s)
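
Note: this test, like every Start failure in this run, never reaches the kubenet plugin under test. The VM is created twice and both attempts stop at the same point, when socket_vmnet_client cannot reach the socket_vmnet daemon on the host ("Connection refused"). A minimal host-side triage sketch follows; it assumes the Homebrew-style install implied by the /opt/socket_vmnet paths above, and the launchd service name is a guess that may differ on this agent:

	# Does the socket exist, and is anything listening on it?
	ls -l /var/run/socket_vmnet
	# Probe the UNIX socket directly; "Connection refused" here reproduces the error above.
	nc -U /var/run/socket_vmnet < /dev/null
	# Service name per the Homebrew socket_vmnet formula (assumption).
	sudo launchctl list | grep -i socket_vmnet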

TestStartStop/group/old-k8s-version/serial/FirstStart (10.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-601000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-601000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.047544375s)

-- stdout --
	* [old-k8s-version-601000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-601000" primary control-plane node in "old-k8s-version-601000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-601000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:23:55.326481    6539 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:23:55.326614    6539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:23:55.326618    6539 out.go:358] Setting ErrFile to fd 2...
	I0913 17:23:55.326620    6539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:23:55.326752    6539 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:23:55.327891    6539 out.go:352] Setting JSON to false
	I0913 17:23:55.344302    6539 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4999,"bootTime":1726268436,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:23:55.344368    6539 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:23:55.350737    6539 out.go:177] * [old-k8s-version-601000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:23:55.358642    6539 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:23:55.358708    6539 notify.go:220] Checking for updates...
	I0913 17:23:55.365585    6539 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:23:55.368633    6539 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:23:55.371611    6539 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:23:55.374547    6539 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:23:55.377605    6539 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:23:55.382082    6539 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:23:55.382153    6539 config.go:182] Loaded profile config "stopped-upgrade-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:23:55.382205    6539 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:23:55.386605    6539 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:23:55.393629    6539 start.go:297] selected driver: qemu2
	I0913 17:23:55.393634    6539 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:23:55.393640    6539 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:23:55.395742    6539 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:23:55.398592    6539 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:23:55.400153    6539 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:23:55.400179    6539 cni.go:84] Creating CNI manager for ""
	I0913 17:23:55.400210    6539 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0913 17:23:55.400234    6539 start.go:340] cluster config:
	{Name:old-k8s-version-601000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:23:55.403684    6539 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:23:55.410579    6539 out.go:177] * Starting "old-k8s-version-601000" primary control-plane node in "old-k8s-version-601000" cluster
	I0913 17:23:55.414560    6539 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 17:23:55.414574    6539 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0913 17:23:55.414583    6539 cache.go:56] Caching tarball of preloaded images
	I0913 17:23:55.414642    6539 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:23:55.414648    6539 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0913 17:23:55.414710    6539 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/old-k8s-version-601000/config.json ...
	I0913 17:23:55.414721    6539 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/old-k8s-version-601000/config.json: {Name:mk4967a28c30f28e9e3a216e05fb29643eeaf5a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:23:55.415088    6539 start.go:360] acquireMachinesLock for old-k8s-version-601000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:23:55.415124    6539 start.go:364] duration metric: took 26.333µs to acquireMachinesLock for "old-k8s-version-601000"
	I0913 17:23:55.415134    6539 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:23:55.415157    6539 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:23:55.422600    6539 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 17:23:55.438916    6539 start.go:159] libmachine.API.Create for "old-k8s-version-601000" (driver="qemu2")
	I0913 17:23:55.438944    6539 client.go:168] LocalClient.Create starting
	I0913 17:23:55.439013    6539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:23:55.439049    6539 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:55.439057    6539 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:55.439095    6539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:23:55.439118    6539 main.go:141] libmachine: Decoding PEM data...
	I0913 17:23:55.439125    6539 main.go:141] libmachine: Parsing certificate...
	I0913 17:23:55.439454    6539 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:23:55.599690    6539 main.go:141] libmachine: Creating SSH key...
	I0913 17:23:55.711253    6539 main.go:141] libmachine: Creating Disk image...
	I0913 17:23:55.711260    6539 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:23:55.711473    6539 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/disk.qcow2
	I0913 17:23:55.721018    6539 main.go:141] libmachine: STDOUT: 
	I0913 17:23:55.721036    6539 main.go:141] libmachine: STDERR: 
	I0913 17:23:55.721092    6539 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/disk.qcow2 +20000M
	I0913 17:23:55.728862    6539 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:23:55.728881    6539 main.go:141] libmachine: STDERR: 
	I0913 17:23:55.728897    6539 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/disk.qcow2
	I0913 17:23:55.728902    6539 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:23:55.728913    6539 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:23:55.728952    6539 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:b8:bc:7c:9c:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/disk.qcow2
	I0913 17:23:55.730529    6539 main.go:141] libmachine: STDOUT: 
	I0913 17:23:55.730546    6539 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:23:55.730567    6539 client.go:171] duration metric: took 291.626458ms to LocalClient.Create
	I0913 17:23:57.732996    6539 start.go:128] duration metric: took 2.317841209s to createHost
	I0913 17:23:57.733097    6539 start.go:83] releasing machines lock for "old-k8s-version-601000", held for 2.318036958s
	W0913 17:23:57.733190    6539 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:23:57.750295    6539 out.go:177] * Deleting "old-k8s-version-601000" in qemu2 ...
	W0913 17:23:57.784715    6539 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:23:57.784759    6539 start.go:729] Will try again in 5 seconds ...
	I0913 17:24:02.785627    6539 start.go:360] acquireMachinesLock for old-k8s-version-601000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:24:02.785922    6539 start.go:364] duration metric: took 230.5µs to acquireMachinesLock for "old-k8s-version-601000"
	I0913 17:24:02.785990    6539 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:24:02.786122    6539 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:24:02.796568    6539 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 17:24:02.826260    6539 start.go:159] libmachine.API.Create for "old-k8s-version-601000" (driver="qemu2")
	I0913 17:24:02.826312    6539 client.go:168] LocalClient.Create starting
	I0913 17:24:02.826409    6539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:24:02.826462    6539 main.go:141] libmachine: Decoding PEM data...
	I0913 17:24:02.826477    6539 main.go:141] libmachine: Parsing certificate...
	I0913 17:24:02.826540    6539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:24:02.826573    6539 main.go:141] libmachine: Decoding PEM data...
	I0913 17:24:02.826583    6539 main.go:141] libmachine: Parsing certificate...
	I0913 17:24:02.827041    6539 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:24:02.990628    6539 main.go:141] libmachine: Creating SSH key...
	I0913 17:24:03.280278    6539 main.go:141] libmachine: Creating Disk image...
	I0913 17:24:03.280288    6539 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:24:03.280488    6539 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/disk.qcow2
	I0913 17:24:03.290550    6539 main.go:141] libmachine: STDOUT: 
	I0913 17:24:03.290575    6539 main.go:141] libmachine: STDERR: 
	I0913 17:24:03.290643    6539 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/disk.qcow2 +20000M
	I0913 17:24:03.298869    6539 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:24:03.298896    6539 main.go:141] libmachine: STDERR: 
	I0913 17:24:03.298909    6539 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/disk.qcow2
	I0913 17:24:03.298915    6539 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:24:03.298924    6539 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:24:03.298960    6539 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:e0:de:6c:67:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/disk.qcow2
	I0913 17:24:03.300612    6539 main.go:141] libmachine: STDOUT: 
	I0913 17:24:03.300626    6539 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:24:03.300639    6539 client.go:171] duration metric: took 474.333375ms to LocalClient.Create
	I0913 17:24:05.302768    6539 start.go:128] duration metric: took 2.516690834s to createHost
	I0913 17:24:05.302847    6539 start.go:83] releasing machines lock for "old-k8s-version-601000", held for 2.51699175s
	W0913 17:24:05.303214    6539 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-601000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:05.313778    6539 out.go:201] 
	W0913 17:24:05.320852    6539 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:24:05.320903    6539 out.go:270] * 
	* 
	W0913 17:24:05.322441    6539 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:24:05.331793    6539 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-601000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000: exit status 7 (59.990291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.11s)
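
The trace above also shows the retry path working as designed: createHost fails, the half-created machine is deleted, a second create is attempted five seconds later, and only then does the run exit with GUEST_PROVISION (exit status 80). Because the root cause is the unreachable socket_vmnet daemon rather than the "old-k8s-version-601000" profile, deleting the profile as the error text suggests is unlikely to help; restarting the daemon should. A sketch, assuming the Homebrew install (a manual install would load the launchd plist from the socket_vmnet repository instead):

	# socket_vmnet needs root to use the vmnet framework, hence sudo.
	sudo brew services restart socket_vmnet
	# Then re-run the failing invocation from the test above:
	out/minikube-darwin-arm64 start -p old-k8s-version-601000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.20.0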

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-601000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-601000 create -f testdata/busybox.yaml: exit status 1 (29.920667ms)

** stderr ** 
	error: context "old-k8s-version-601000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-601000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000: exit status 7 (30.216458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-601000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000: exit status 7 (29.388708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
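
This is a cascade failure rather than an independent bug: kubectl contexts are only written to the kubeconfig once a cluster actually comes up, and FirstStart above never produced one. A quick check that distinguishes the two cases (sketch):

	# An empty result means the deploy step had no cluster to target.
	kubectl config get-contexts -o name | grep old-k8s-version-601000 || echo "context missing"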

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-601000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-601000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-601000 describe deploy/metrics-server -n kube-system: exit status 1 (27.537583ms)

** stderr ** 
	error: context "old-k8s-version-601000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-601000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000: exit status 7 (30.067ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
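
Note that the "addons enable" command itself appears to have succeeded (it only edits the stored profile); the failure is in the verification step, which again has no running cluster to query. On a healthy cluster the assertion reduces to checking that the --images/--registries overrides composed into the deployed image, roughly as follows (a sketch; the deployment name and namespace are taken from the test's own describe command above):

	kubectl --context old-k8s-version-601000 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected: fake.domain/registry.k8s.io/echoserver:1.4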

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-601000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-601000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.196870541s)

-- stdout --
	* [old-k8s-version-601000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-601000" primary control-plane node in "old-k8s-version-601000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-601000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-601000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:24:09.424925    6593 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:24:09.425076    6593 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:09.425083    6593 out.go:358] Setting ErrFile to fd 2...
	I0913 17:24:09.425085    6593 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:09.425235    6593 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:24:09.426329    6593 out.go:352] Setting JSON to false
	I0913 17:24:09.443854    6593 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5013,"bootTime":1726268436,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:24:09.443935    6593 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:24:09.449262    6593 out.go:177] * [old-k8s-version-601000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:24:09.457315    6593 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:24:09.457343    6593 notify.go:220] Checking for updates...
	I0913 17:24:09.463299    6593 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:24:09.466189    6593 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:24:09.469239    6593 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:24:09.472268    6593 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:24:09.475198    6593 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:24:09.478611    6593 config.go:182] Loaded profile config "old-k8s-version-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0913 17:24:09.482246    6593 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0913 17:24:09.483828    6593 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:24:09.490266    6593 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 17:24:09.498210    6593 start.go:297] selected driver: qemu2
	I0913 17:24:09.498216    6593 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:24:09.498272    6593 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:24:09.500870    6593 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:24:09.500897    6593 cni.go:84] Creating CNI manager for ""
	I0913 17:24:09.500918    6593 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0913 17:24:09.500942    6593 start.go:340] cluster config:
	{Name:old-k8s-version-601000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:24:09.504736    6593 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:09.513311    6593 out.go:177] * Starting "old-k8s-version-601000" primary control-plane node in "old-k8s-version-601000" cluster
	I0913 17:24:09.517309    6593 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 17:24:09.517328    6593 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0913 17:24:09.517339    6593 cache.go:56] Caching tarball of preloaded images
	I0913 17:24:09.517411    6593 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:24:09.517417    6593 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0913 17:24:09.517495    6593 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/old-k8s-version-601000/config.json ...
	I0913 17:24:09.518052    6593 start.go:360] acquireMachinesLock for old-k8s-version-601000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:24:09.518081    6593 start.go:364] duration metric: took 22.875µs to acquireMachinesLock for "old-k8s-version-601000"
	I0913 17:24:09.518089    6593 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:24:09.518095    6593 fix.go:54] fixHost starting: 
	I0913 17:24:09.518214    6593 fix.go:112] recreateIfNeeded on old-k8s-version-601000: state=Stopped err=<nil>
	W0913 17:24:09.518223    6593 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:24:09.521245    6593 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-601000" ...
	I0913 17:24:09.529092    6593 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:24:09.529154    6593 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:e0:de:6c:67:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/disk.qcow2
	I0913 17:24:09.531094    6593 main.go:141] libmachine: STDOUT: 
	I0913 17:24:09.531111    6593 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:24:09.531140    6593 fix.go:56] duration metric: took 13.044666ms for fixHost
	I0913 17:24:09.531145    6593 start.go:83] releasing machines lock for "old-k8s-version-601000", held for 13.060167ms
	W0913 17:24:09.531150    6593 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:24:09.531182    6593 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:09.531186    6593 start.go:729] Will try again in 5 seconds ...
	I0913 17:24:14.532944    6593 start.go:360] acquireMachinesLock for old-k8s-version-601000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:24:14.533368    6593 start.go:364] duration metric: took 304.5µs to acquireMachinesLock for "old-k8s-version-601000"
	I0913 17:24:14.533481    6593 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:24:14.533501    6593 fix.go:54] fixHost starting: 
	I0913 17:24:14.534267    6593 fix.go:112] recreateIfNeeded on old-k8s-version-601000: state=Stopped err=<nil>
	W0913 17:24:14.534294    6593 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:24:14.542134    6593 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-601000" ...
	I0913 17:24:14.545086    6593 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:24:14.545443    6593 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:e0:de:6c:67:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/old-k8s-version-601000/disk.qcow2
	I0913 17:24:14.555335    6593 main.go:141] libmachine: STDOUT: 
	I0913 17:24:14.555410    6593 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:24:14.555490    6593 fix.go:56] duration metric: took 21.98875ms for fixHost
	I0913 17:24:14.555515    6593 start.go:83] releasing machines lock for "old-k8s-version-601000", held for 22.123583ms
	W0913 17:24:14.555732    6593 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-601000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-601000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:14.563977    6593 out.go:201] 
	W0913 17:24:14.568180    6593 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:24:14.568203    6593 out.go:270] * 
	* 
	W0913 17:24:14.570556    6593 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:24:14.578132    6593 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-601000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000: exit status 7 (68.33ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
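Every qemu2 start in this group fails at the same step: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which needs the socket_vmnet daemon to be listening on /var/run/socket_vmnet, and "Connection refused" means no daemon was listening on this agent. Everything upstream was fine (the preload tarball was found in cache, the existing machine config was reused), so networking is the only failing step. A minimal sketch for checking the daemon on the agent; the launchd label is an assumption, since it depends on how socket_vmnet was installed:

	ls -l /var/run/socket_vmnet                 # socket file should exist while the daemon runs
	sudo launchctl list | grep -i socket_vmnet  # assumed label; varies by install method
	nc -U /var/run/socket_vmnet < /dev/null     # should fail the same way if nothing is listening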

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-601000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000: exit status 7 (32.896209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
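This and the remaining old-k8s-version failures never reach a cluster: the kubeconfig entry for a profile is only written during a successful start, and every start of this profile failed before provisioning, so the "old-k8s-version-601000" context was never created. Standard kubectl commands confirm that directly:

	kubectl config get-contexts       # the old-k8s-version-601000 entry will be absent
	kubectl config current-context    # whatever context kubectl would otherwise use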

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-601000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-601000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-601000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.090917ms)

** stderr ** 
	error: context "old-k8s-version-601000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-601000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000: exit status 7 (31.49075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
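The check at start_stop_delete_test.go:297 greps the deployment description for the overridden image registry.k8s.io/echoserver:1.4 (set through CustomAddonImages in the cluster config above). On a healthy cluster the same information can be pulled more directly with jsonpath; a sketch, reusing the deployment name from the test:

	kubectl --context old-k8s-version-601000 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'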

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-601000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000: exit status 7 (30.331208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
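The (-want +got) block above is a go-cmp-style diff: each "-" line is an expected image missing from the actual output. With the host stopped, the exact command the test runs returned no images, so all eight v1.20.0 images show up on the "-" side:

	out/minikube-darwin-arm64 -p old-k8s-version-601000 image list --format=json
	# returns no images while the profile is stopped, hence every expected image is reported missing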

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-601000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-601000 --alsologtostderr -v=1: exit status 83 (39.418292ms)

-- stdout --
	* The control-plane node old-k8s-version-601000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-601000"

-- /stdout --
** stderr ** 
	I0913 17:24:14.853580    6613 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:24:14.854589    6613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:14.854596    6613 out.go:358] Setting ErrFile to fd 2...
	I0913 17:24:14.854598    6613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:14.854810    6613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:24:14.855037    6613 out.go:352] Setting JSON to false
	I0913 17:24:14.855043    6613 mustload.go:65] Loading cluster: old-k8s-version-601000
	I0913 17:24:14.855264    6613 config.go:182] Loaded profile config "old-k8s-version-601000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0913 17:24:14.859087    6613 out.go:177] * The control-plane node old-k8s-version-601000 host is not running: state=Stopped
	I0913 17:24:14.860273    6613 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-601000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-601000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000: exit status 7 (29.482416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-601000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000: exit status 7 (29.215958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-601000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
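Unlike the hard exit 80 (GUEST_PROVISION) from the start attempts, pause exits with the advisory code 83: mustload sees state=Stopped and prints guidance instead of attempting anything. In scripts it can help to gate on host state first, using the same status command the harness runs; a sketch:

	if [ "$(out/minikube-darwin-arm64 status -p old-k8s-version-601000 --format='{{.Host}}')" = "Running" ]; then
	  out/minikube-darwin-arm64 pause -p old-k8s-version-601000
	fi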

TestStartStop/group/no-preload/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-098000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-098000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.84250875s)

-- stdout --
	* [no-preload-098000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-098000" primary control-plane node in "no-preload-098000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-098000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:24:15.185390    6630 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:24:15.185712    6630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:15.185717    6630 out.go:358] Setting ErrFile to fd 2...
	I0913 17:24:15.185720    6630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:15.185929    6630 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:24:15.187315    6630 out.go:352] Setting JSON to false
	I0913 17:24:15.203984    6630 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5019,"bootTime":1726268436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:24:15.204053    6630 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:24:15.209015    6630 out.go:177] * [no-preload-098000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:24:15.216085    6630 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:24:15.216198    6630 notify.go:220] Checking for updates...
	I0913 17:24:15.223035    6630 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:24:15.225998    6630 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:24:15.229093    6630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:24:15.232048    6630 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:24:15.235014    6630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:24:15.238335    6630 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:24:15.238401    6630 config.go:182] Loaded profile config "stopped-upgrade-434000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0913 17:24:15.238477    6630 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:24:15.242042    6630 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:24:15.249015    6630 start.go:297] selected driver: qemu2
	I0913 17:24:15.249022    6630 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:24:15.249029    6630 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:24:15.251404    6630 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:24:15.254043    6630 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:24:15.256991    6630 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:24:15.257007    6630 cni.go:84] Creating CNI manager for ""
	I0913 17:24:15.257027    6630 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:24:15.257032    6630 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 17:24:15.257058    6630 start.go:340] cluster config:
	{Name:no-preload-098000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-098000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:24:15.260883    6630 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:15.269006    6630 out.go:177] * Starting "no-preload-098000" primary control-plane node in "no-preload-098000" cluster
	I0913 17:24:15.273028    6630 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:24:15.273096    6630 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/no-preload-098000/config.json ...
	I0913 17:24:15.273116    6630 cache.go:107] acquiring lock: {Name:mkcefae73ae7b323d0a2cb91a0a61e7dadc9469f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:15.273125    6630 cache.go:107] acquiring lock: {Name:mk3743d7e82a345c8c2b36fc4f28251713176251 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:15.273130    6630 cache.go:107] acquiring lock: {Name:mke1353d1e41993394f791662df3a50c91008dc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:15.273132    6630 cache.go:107] acquiring lock: {Name:mkda63fb134f8db15e8b3d0e649ee1b1b6e165bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:15.273124    6630 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/no-preload-098000/config.json: {Name:mk7692f5bcc26e09d9b906335359fee75be0f4b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:24:15.273175    6630 cache.go:115] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0913 17:24:15.273181    6630 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 69.625µs
	I0913 17:24:15.273187    6630 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0913 17:24:15.273193    6630 cache.go:107] acquiring lock: {Name:mk9b573f9959fc39093f8847d121f3aaa198bf09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:15.273117    6630 cache.go:107] acquiring lock: {Name:mkf27ea50d7b5be11c6289b8994ddf478ec5a54b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:15.273250    6630 cache.go:107] acquiring lock: {Name:mkc207a36405984d0bd01bfd42c324bf3230116b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:15.273254    6630 cache.go:107] acquiring lock: {Name:mk21eec1fc41254f5ca45cb73d72fe21416563c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:15.273453    6630 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0913 17:24:15.273457    6630 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 17:24:15.273461    6630 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 17:24:15.273454    6630 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0913 17:24:15.273521    6630 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 17:24:15.273580    6630 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 17:24:15.273722    6630 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 17:24:15.273724    6630 start.go:360] acquireMachinesLock for no-preload-098000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:24:15.273769    6630 start.go:364] duration metric: took 34.75µs to acquireMachinesLock for "no-preload-098000"
	I0913 17:24:15.273780    6630 start.go:93] Provisioning new machine with config: &{Name:no-preload-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-098000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:24:15.273824    6630 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:24:15.282037    6630 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 17:24:15.286188    6630 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0913 17:24:15.286224    6630 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0913 17:24:15.286267    6630 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0913 17:24:15.286339    6630 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0913 17:24:15.286415    6630 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0913 17:24:15.286814    6630 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0913 17:24:15.288421    6630 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0913 17:24:15.298767    6630 start.go:159] libmachine.API.Create for "no-preload-098000" (driver="qemu2")
	I0913 17:24:15.298799    6630 client.go:168] LocalClient.Create starting
	I0913 17:24:15.298874    6630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:24:15.298903    6630 main.go:141] libmachine: Decoding PEM data...
	I0913 17:24:15.298910    6630 main.go:141] libmachine: Parsing certificate...
	I0913 17:24:15.298947    6630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:24:15.298976    6630 main.go:141] libmachine: Decoding PEM data...
	I0913 17:24:15.298983    6630 main.go:141] libmachine: Parsing certificate...
	I0913 17:24:15.299305    6630 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:24:15.466924    6630 main.go:141] libmachine: Creating SSH key...
	I0913 17:24:15.526682    6630 main.go:141] libmachine: Creating Disk image...
	I0913 17:24:15.526707    6630 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:24:15.526890    6630 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/disk.qcow2
	I0913 17:24:15.537112    6630 main.go:141] libmachine: STDOUT: 
	I0913 17:24:15.537133    6630 main.go:141] libmachine: STDERR: 
	I0913 17:24:15.537220    6630 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/disk.qcow2 +20000M
	I0913 17:24:15.546142    6630 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:24:15.546160    6630 main.go:141] libmachine: STDERR: 
	I0913 17:24:15.546188    6630 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/disk.qcow2
	I0913 17:24:15.546194    6630 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:24:15.546211    6630 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:24:15.546244    6630 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:24:da:16:7f:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/disk.qcow2
	I0913 17:24:15.548096    6630 main.go:141] libmachine: STDOUT: 
	I0913 17:24:15.548111    6630 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:24:15.548132    6630 client.go:171] duration metric: took 249.329542ms to LocalClient.Create
	I0913 17:24:15.676791    6630 cache.go:162] opening:  /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0913 17:24:15.705359    6630 cache.go:162] opening:  /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0913 17:24:15.715489    6630 cache.go:162] opening:  /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I0913 17:24:15.727674    6630 cache.go:162] opening:  /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I0913 17:24:15.762576    6630 cache.go:162] opening:  /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I0913 17:24:15.784123    6630 cache.go:162] opening:  /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I0913 17:24:15.825261    6630 cache.go:162] opening:  /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0913 17:24:15.922957    6630 cache.go:157] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0913 17:24:15.922981    6630 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 649.867625ms
	I0913 17:24:15.922998    6630 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0913 17:24:17.548143    6630 start.go:128] duration metric: took 2.27438275s to createHost
	I0913 17:24:17.548153    6630 start.go:83] releasing machines lock for "no-preload-098000", held for 2.274450125s
	W0913 17:24:17.548165    6630 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:17.560406    6630 out.go:177] * Deleting "no-preload-098000" in qemu2 ...
	W0913 17:24:17.581878    6630 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:17.581888    6630 start.go:729] Will try again in 5 seconds ...
	I0913 17:24:19.284500    6630 cache.go:157] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0913 17:24:19.284545    6630 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 4.011558666s
	I0913 17:24:19.284557    6630 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0913 17:24:19.864578    6630 cache.go:157] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0913 17:24:19.864623    6630 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 4.591511333s
	I0913 17:24:19.864643    6630 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0913 17:24:19.966364    6630 cache.go:157] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0913 17:24:19.966390    6630 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.693406542s
	I0913 17:24:19.966414    6630 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0913 17:24:20.369128    6630 cache.go:157] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0913 17:24:20.369153    6630 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 5.09607225s
	I0913 17:24:20.369166    6630 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0913 17:24:20.791055    6630 cache.go:157] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0913 17:24:20.791105    6630 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 5.518165375s
	I0913 17:24:20.791122    6630 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0913 17:24:22.581949    6630 start.go:360] acquireMachinesLock for no-preload-098000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:24:22.582522    6630 start.go:364] duration metric: took 488.041µs to acquireMachinesLock for "no-preload-098000"
	I0913 17:24:22.582657    6630 start.go:93] Provisioning new machine with config: &{Name:no-preload-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-098000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:24:22.582966    6630 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:24:22.593634    6630 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 17:24:22.644989    6630 start.go:159] libmachine.API.Create for "no-preload-098000" (driver="qemu2")
	I0913 17:24:22.645046    6630 client.go:168] LocalClient.Create starting
	I0913 17:24:22.645181    6630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:24:22.645263    6630 main.go:141] libmachine: Decoding PEM data...
	I0913 17:24:22.645288    6630 main.go:141] libmachine: Parsing certificate...
	I0913 17:24:22.645359    6630 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:24:22.645407    6630 main.go:141] libmachine: Decoding PEM data...
	I0913 17:24:22.645427    6630 main.go:141] libmachine: Parsing certificate...
	I0913 17:24:22.645957    6630 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:24:22.817204    6630 main.go:141] libmachine: Creating SSH key...
	I0913 17:24:22.937650    6630 main.go:141] libmachine: Creating Disk image...
	I0913 17:24:22.937660    6630 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:24:22.937834    6630 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/disk.qcow2
	I0913 17:24:22.948026    6630 main.go:141] libmachine: STDOUT: 
	I0913 17:24:22.948056    6630 main.go:141] libmachine: STDERR: 
	I0913 17:24:22.948142    6630 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/disk.qcow2 +20000M
	I0913 17:24:22.956609    6630 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:24:22.956625    6630 main.go:141] libmachine: STDERR: 
	I0913 17:24:22.956640    6630 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/disk.qcow2
	I0913 17:24:22.956645    6630 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:24:22.956673    6630 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:24:22.956742    6630 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:73:dd:e0:c9:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/disk.qcow2
	I0913 17:24:22.958596    6630 main.go:141] libmachine: STDOUT: 
	I0913 17:24:22.958615    6630 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:24:22.958629    6630 client.go:171] duration metric: took 313.583542ms to LocalClient.Create
	I0913 17:24:23.729974    6630 cache.go:157] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0913 17:24:23.730024    6630 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 8.457096375s
	I0913 17:24:23.730046    6630 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0913 17:24:23.730116    6630 cache.go:87] Successfully saved all images to host disk.
	I0913 17:24:24.960847    6630 start.go:128] duration metric: took 2.377913625s to createHost
	I0913 17:24:24.960930    6630 start.go:83] releasing machines lock for "no-preload-098000", held for 2.378457125s
	W0913 17:24:24.961281    6630 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-098000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-098000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:24.974814    6630 out.go:201] 
	W0913 17:24:24.978870    6630 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:24:24.978905    6630 out.go:270] * 
	* 
	W0913 17:24:24.980391    6630 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:24:24.989736    6630 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-098000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000: exit status 7 (43.59025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-098000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.89s)
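With --preload=false this start exercises the non-preload path visible above: every registry.k8s.io image is fetched and saved into the per-arch cache individually (the log ends with "Successfully saved all images to host disk"), and the machine disk is built with qemu-img before the launch fails on the same socket_vmnet hookup. The disk steps can be replayed on a throwaway file to see what they do; the scratch names here are made up:

	qemu-img create -f raw scratch.raw 1M
	qemu-img convert -f raw -O qcow2 scratch.raw scratch.qcow2
	qemu-img resize scratch.qcow2 +20000M
	qemu-img info scratch.qcow2      # virtual size should now reflect the +20000M growth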

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-098000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-098000 create -f testdata/busybox.yaml: exit status 1 (27.691834ms)

** stderr ** 
	error: context "no-preload-098000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-098000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000: exit status 7 (29.5965ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-098000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000: exit status 7 (30.085959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-098000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
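
This failure is purely downstream of FirstStart: the cluster was never created, so no kubeconfig context named no-preload-098000 exists for kubectl to use. A small sketch of the guard one could run before issuing context-scoped kubectl commands (hypothetical helper, not test-suite code; it relies only on "kubectl config get-contexts -o name"):

// context_check.go - hypothetical helper.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether a kubeconfig context with the given name exists.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("no-preload-098000")
	fmt.Println(ok, err) // in the state captured above this prints: false <nil>
}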

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-098000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-098000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-098000 describe deploy/metrics-server -n kube-system: exit status 1 (27.760959ms)

** stderr ** 
	error: context "no-preload-098000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-098000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000: exit status 7 (30.490833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-098000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
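
For reference, the assertion behind this test: after "addons enable metrics-server --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain", the metrics-server deployment should run the image fake.domain/registry.k8s.io/echoserver:1.4. A sketch of that check (hypothetical and standalone; the actual test inspects "kubectl describe" output rather than using jsonpath):

// addon_image_check.go - hypothetical verification sketch.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "no-preload-098000",
		"-n", "kube-system", "get", "deploy", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
	if err != nil {
		// This is the branch taken here: the context does not exist.
		fmt.Println("cannot query deployment:", err)
		return
	}
	image := strings.TrimSpace(string(out))
	// Expected value taken from the failure message above.
	fmt.Println(image == "fake.domain/registry.k8s.io/echoserver:1.4")
}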

TestStartStop/group/no-preload/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-098000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-098000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.180981875s)

-- stdout --
	* [no-preload-098000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-098000" primary control-plane node in "no-preload-098000" cluster
	* Restarting existing qemu2 VM for "no-preload-098000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-098000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:24:28.712247    6724 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:24:28.712404    6724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:28.712408    6724 out.go:358] Setting ErrFile to fd 2...
	I0913 17:24:28.712410    6724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:28.712557    6724 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:24:28.713617    6724 out.go:352] Setting JSON to false
	I0913 17:24:28.730174    6724 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5032,"bootTime":1726268436,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:24:28.730243    6724 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:24:28.734775    6724 out.go:177] * [no-preload-098000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:24:28.742833    6724 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:24:28.742898    6724 notify.go:220] Checking for updates...
	I0913 17:24:28.749749    6724 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:24:28.752766    6724 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:24:28.755793    6724 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:24:28.756930    6724 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:24:28.759767    6724 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:24:28.763028    6724 config.go:182] Loaded profile config "no-preload-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:24:28.763287    6724 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:24:28.767555    6724 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 17:24:28.774802    6724 start.go:297] selected driver: qemu2
	I0913 17:24:28.774807    6724 start.go:901] validating driver "qemu2" against &{Name:no-preload-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-098000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:24:28.774855    6724 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:24:28.777113    6724 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:24:28.777136    6724 cni.go:84] Creating CNI manager for ""
	I0913 17:24:28.777154    6724 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:24:28.777178    6724 start.go:340] cluster config:
	{Name:no-preload-098000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-098000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:24:28.780490    6724 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:28.787777    6724 out.go:177] * Starting "no-preload-098000" primary control-plane node in "no-preload-098000" cluster
	I0913 17:24:28.791756    6724 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:24:28.791824    6724 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/no-preload-098000/config.json ...
	I0913 17:24:28.791874    6724 cache.go:107] acquiring lock: {Name:mkcefae73ae7b323d0a2cb91a0a61e7dadc9469f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:28.791880    6724 cache.go:107] acquiring lock: {Name:mk3743d7e82a345c8c2b36fc4f28251713176251 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:28.791940    6724 cache.go:115] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0913 17:24:28.791947    6724 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 80.667µs
	I0913 17:24:28.791957    6724 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0913 17:24:28.791962    6724 cache.go:107] acquiring lock: {Name:mk9b573f9959fc39093f8847d121f3aaa198bf09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:28.791885    6724 cache.go:107] acquiring lock: {Name:mke1353d1e41993394f791662df3a50c91008dc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:28.791981    6724 cache.go:107] acquiring lock: {Name:mk21eec1fc41254f5ca45cb73d72fe21416563c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:28.791992    6724 cache.go:107] acquiring lock: {Name:mkda63fb134f8db15e8b3d0e649ee1b1b6e165bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:28.791998    6724 cache.go:115] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0913 17:24:28.792001    6724 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 39.625µs
	I0913 17:24:28.792004    6724 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0913 17:24:28.791958    6724 cache.go:115] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0913 17:24:28.792030    6724 cache.go:115] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0913 17:24:28.792039    6724 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 58.334µs
	I0913 17:24:28.792035    6724 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 147.458µs
	I0913 17:24:28.792045    6724 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0913 17:24:28.792047    6724 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0913 17:24:28.792057    6724 cache.go:107] acquiring lock: {Name:mkc207a36405984d0bd01bfd42c324bf3230116b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:28.792063    6724 cache.go:115] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0913 17:24:28.792068    6724 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 120.125µs
	I0913 17:24:28.792073    6724 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0913 17:24:28.792105    6724 cache.go:107] acquiring lock: {Name:mkf27ea50d7b5be11c6289b8994ddf478ec5a54b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:28.792112    6724 cache.go:115] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0913 17:24:28.792118    6724 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 82.291µs
	I0913 17:24:28.792125    6724 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0913 17:24:28.792149    6724 cache.go:115] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0913 17:24:28.792154    6724 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 66.125µs
	I0913 17:24:28.792161    6724 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0913 17:24:28.792150    6724 cache.go:115] /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0913 17:24:28.792165    6724 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 297.5µs
	I0913 17:24:28.792169    6724 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0913 17:24:28.792173    6724 cache.go:87] Successfully saved all images to host disk.
	I0913 17:24:28.792270    6724 start.go:360] acquireMachinesLock for no-preload-098000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:24:28.792298    6724 start.go:364] duration metric: took 22.958µs to acquireMachinesLock for "no-preload-098000"
	I0913 17:24:28.792306    6724 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:24:28.792310    6724 fix.go:54] fixHost starting: 
	I0913 17:24:28.792422    6724 fix.go:112] recreateIfNeeded on no-preload-098000: state=Stopped err=<nil>
	W0913 17:24:28.792432    6724 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:24:28.800817    6724 out.go:177] * Restarting existing qemu2 VM for "no-preload-098000" ...
	I0913 17:24:28.804718    6724 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:24:28.804753    6724 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:73:dd:e0:c9:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/disk.qcow2
	I0913 17:24:28.806687    6724 main.go:141] libmachine: STDOUT: 
	I0913 17:24:28.806707    6724 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:24:28.806735    6724 fix.go:56] duration metric: took 14.423708ms for fixHost
	I0913 17:24:28.806740    6724 start.go:83] releasing machines lock for "no-preload-098000", held for 14.438584ms
	W0913 17:24:28.806746    6724 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:24:28.806774    6724 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:28.806778    6724 start.go:729] Will try again in 5 seconds ...
	I0913 17:24:33.808788    6724 start.go:360] acquireMachinesLock for no-preload-098000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:24:33.809268    6724 start.go:364] duration metric: took 387.667µs to acquireMachinesLock for "no-preload-098000"
	I0913 17:24:33.809403    6724 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:24:33.809417    6724 fix.go:54] fixHost starting: 
	I0913 17:24:33.809927    6724 fix.go:112] recreateIfNeeded on no-preload-098000: state=Stopped err=<nil>
	W0913 17:24:33.809941    6724 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:24:33.812923    6724 out.go:177] * Restarting existing qemu2 VM for "no-preload-098000" ...
	I0913 17:24:33.820752    6724 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:24:33.820920    6724 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:73:dd:e0:c9:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/no-preload-098000/disk.qcow2
	I0913 17:24:33.828998    6724 main.go:141] libmachine: STDOUT: 
	I0913 17:24:33.829072    6724 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:24:33.829140    6724 fix.go:56] duration metric: took 19.717375ms for fixHost
	I0913 17:24:33.829161    6724 start.go:83] releasing machines lock for "no-preload-098000", held for 19.862125ms
	W0913 17:24:33.829348    6724 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-098000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-098000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:33.837849    6724 out.go:201] 
	W0913 17:24:33.841808    6724 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:24:33.841827    6724 out.go:270] * 
	* 
	W0913 17:24:33.843421    6724 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:24:33.855785    6724 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-098000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000: exit status 7 (62.658417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-098000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)
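
The trace above also shows minikube's recovery behavior on a start failure: StartHost fails, it logs "Will try again in 5 seconds", retries once against the same machine lock, and only then exits with GUEST_PROVISION (exit status 80). A condensed sketch of that retry-once shape (an illustration of the control flow, not minikube's actual code):

// retry_once.go - illustration of the retry pattern visible in the trace.
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start; in these logs it always fails
// with: Failed to connect to "/var/run/socket_vmnet": Connection refused
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}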

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-098000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000: exit status 7 (32.412583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-098000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-098000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-098000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-098000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.159791ms)

** stderr ** 
	error: context "no-preload-098000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-098000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000: exit status 7 (33.315792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-098000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-098000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000: exit status 7 (41.913375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-098000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
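
The -want +got diff above is simply the complete expected image set for v1.31.1 compared against an empty result: with the host stopped, "minikube image list --format=json" returns nothing. A sketch that reproduces the comparison (hypothetical; a plain set difference instead of the go-cmp diff the test prints):

// image_diff.go - hypothetical reproduction of the missing-images check.
package main

import "fmt"

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-controller-manager:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/pause:3.10",
	}
	got := map[string]bool{} // empty: no images reported by the stopped host
	for _, img := range want {
		if !got[img] {
			fmt.Println("-", img) // every expected image is missing
		}
	}
}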

TestStartStop/group/embed-certs/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-185000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-185000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.953450292s)

-- stdout --
	* [embed-certs-185000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-185000" primary control-plane node in "embed-certs-185000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-185000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:24:34.142185    6746 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:24:34.142322    6746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:34.142325    6746 out.go:358] Setting ErrFile to fd 2...
	I0913 17:24:34.142327    6746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:34.142466    6746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:24:34.143562    6746 out.go:352] Setting JSON to false
	I0913 17:24:34.160752    6746 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5038,"bootTime":1726268436,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:24:34.160826    6746 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:24:34.164886    6746 out.go:177] * [embed-certs-185000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:24:34.171905    6746 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:24:34.172030    6746 notify.go:220] Checking for updates...
	I0913 17:24:34.177821    6746 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:24:34.180871    6746 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:24:34.183837    6746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:24:34.186889    6746 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:24:34.193207    6746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:24:34.197206    6746 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:24:34.197268    6746 config.go:182] Loaded profile config "no-preload-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:24:34.197313    6746 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:24:34.200812    6746 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:24:34.207931    6746 start.go:297] selected driver: qemu2
	I0913 17:24:34.207939    6746 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:24:34.207945    6746 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:24:34.210508    6746 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:24:34.213847    6746 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:24:34.216955    6746 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:24:34.216973    6746 cni.go:84] Creating CNI manager for ""
	I0913 17:24:34.216995    6746 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:24:34.217006    6746 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 17:24:34.217037    6746 start.go:340] cluster config:
	{Name:embed-certs-185000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-185000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:24:34.220561    6746 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:34.229861    6746 out.go:177] * Starting "embed-certs-185000" primary control-plane node in "embed-certs-185000" cluster
	I0913 17:24:34.233860    6746 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:24:34.233881    6746 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:24:34.233891    6746 cache.go:56] Caching tarball of preloaded images
	I0913 17:24:34.233994    6746 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:24:34.234002    6746 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:24:34.234054    6746 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/embed-certs-185000/config.json ...
	I0913 17:24:34.234066    6746 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/embed-certs-185000/config.json: {Name:mkb926f2a845182c5b3881a15d046eda1cca09f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:24:34.234357    6746 start.go:360] acquireMachinesLock for embed-certs-185000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:24:34.234391    6746 start.go:364] duration metric: took 27.459µs to acquireMachinesLock for "embed-certs-185000"
	I0913 17:24:34.234402    6746 start.go:93] Provisioning new machine with config: &{Name:embed-certs-185000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-185000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:24:34.234439    6746 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:24:34.237887    6746 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 17:24:34.254497    6746 start.go:159] libmachine.API.Create for "embed-certs-185000" (driver="qemu2")
	I0913 17:24:34.254527    6746 client.go:168] LocalClient.Create starting
	I0913 17:24:34.254592    6746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:24:34.254622    6746 main.go:141] libmachine: Decoding PEM data...
	I0913 17:24:34.254644    6746 main.go:141] libmachine: Parsing certificate...
	I0913 17:24:34.254687    6746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:24:34.254710    6746 main.go:141] libmachine: Decoding PEM data...
	I0913 17:24:34.254720    6746 main.go:141] libmachine: Parsing certificate...
	I0913 17:24:34.255095    6746 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:24:34.505506    6746 main.go:141] libmachine: Creating SSH key...
	I0913 17:24:34.568854    6746 main.go:141] libmachine: Creating Disk image...
	I0913 17:24:34.568863    6746 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:24:34.569054    6746 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/disk.qcow2
	I0913 17:24:34.578647    6746 main.go:141] libmachine: STDOUT: 
	I0913 17:24:34.578670    6746 main.go:141] libmachine: STDERR: 
	I0913 17:24:34.578724    6746 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/disk.qcow2 +20000M
	I0913 17:24:34.587286    6746 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:24:34.587305    6746 main.go:141] libmachine: STDERR: 
	I0913 17:24:34.587327    6746 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/disk.qcow2
	I0913 17:24:34.587333    6746 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:24:34.587347    6746 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:24:34.587382    6746 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:0e:7a:64:46:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/disk.qcow2
	I0913 17:24:34.589224    6746 main.go:141] libmachine: STDOUT: 
	I0913 17:24:34.589238    6746 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:24:34.589259    6746 client.go:171] duration metric: took 334.7365ms to LocalClient.Create
	I0913 17:24:36.591494    6746 start.go:128] duration metric: took 2.35709175s to createHost
	I0913 17:24:36.591613    6746 start.go:83] releasing machines lock for "embed-certs-185000", held for 2.357287208s
	W0913 17:24:36.591661    6746 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:36.609961    6746 out.go:177] * Deleting "embed-certs-185000" in qemu2 ...
	W0913 17:24:36.634354    6746 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:36.634380    6746 start.go:729] Will try again in 5 seconds ...
	I0913 17:24:41.636515    6746 start.go:360] acquireMachinesLock for embed-certs-185000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:24:41.636955    6746 start.go:364] duration metric: took 343.25µs to acquireMachinesLock for "embed-certs-185000"
	I0913 17:24:41.637090    6746 start.go:93] Provisioning new machine with config: &{Name:embed-certs-185000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-185000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:24:41.637390    6746 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:24:41.642980    6746 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 17:24:41.693698    6746 start.go:159] libmachine.API.Create for "embed-certs-185000" (driver="qemu2")
	I0913 17:24:41.693769    6746 client.go:168] LocalClient.Create starting
	I0913 17:24:41.693902    6746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:24:41.693980    6746 main.go:141] libmachine: Decoding PEM data...
	I0913 17:24:41.693998    6746 main.go:141] libmachine: Parsing certificate...
	I0913 17:24:41.694065    6746 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:24:41.694115    6746 main.go:141] libmachine: Decoding PEM data...
	I0913 17:24:41.694133    6746 main.go:141] libmachine: Parsing certificate...
	I0913 17:24:41.695067    6746 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:24:41.864078    6746 main.go:141] libmachine: Creating SSH key...
	I0913 17:24:41.991573    6746 main.go:141] libmachine: Creating Disk image...
	I0913 17:24:41.991579    6746 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:24:41.991743    6746 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/disk.qcow2
	I0913 17:24:42.001237    6746 main.go:141] libmachine: STDOUT: 
	I0913 17:24:42.001259    6746 main.go:141] libmachine: STDERR: 
	I0913 17:24:42.001314    6746 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/disk.qcow2 +20000M
	I0913 17:24:42.009128    6746 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:24:42.009147    6746 main.go:141] libmachine: STDERR: 
	I0913 17:24:42.009156    6746 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/disk.qcow2
	I0913 17:24:42.009162    6746 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:24:42.009170    6746 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:24:42.009206    6746 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:bd:04:16:6f:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/disk.qcow2
	I0913 17:24:42.010836    6746 main.go:141] libmachine: STDOUT: 
	I0913 17:24:42.010863    6746 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:24:42.010875    6746 client.go:171] duration metric: took 317.100666ms to LocalClient.Create
	I0913 17:24:44.012992    6746 start.go:128] duration metric: took 2.375649125s to createHost
	I0913 17:24:44.013055    6746 start.go:83] releasing machines lock for "embed-certs-185000", held for 2.376148916s
	W0913 17:24:44.013387    6746 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-185000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-185000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:44.028070    6746 out.go:201] 
	W0913 17:24:44.032238    6746 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:24:44.032265    6746 out.go:270] * 
	* 
	W0913 17:24:44.034958    6746 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:24:44.049019    6746 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-185000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (66.250709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.02s)
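
Every start failure in this group traces back to the same stderr line: Failed to connect to "/var/run/socket_vmnet": Connection refused. socket_vmnet_client must obtain a file descriptor from the socket_vmnet daemon's unix socket before it can hand it to qemu-system-aarch64, and a refused connection on that socket means no daemon is listening at that path. Below is a minimal Go diagnostic sketch, not part of this test suite, that reproduces just the dial step; the socket path is copied from the failing command above.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken verbatim from the socket_vmnet_client invocation in the log.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" means nothing is accepting on the socket,
		// the same condition behind every GUEST_PROVISION exit in this report.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections on", sock)
}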

TestStartStop/group/no-preload/serial/Pause (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-098000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-098000 --alsologtostderr -v=1: exit status 83 (45.447625ms)

-- stdout --
	* The control-plane node no-preload-098000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-098000"

-- /stdout --
** stderr ** 
	I0913 17:24:34.142972    6747 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:24:34.143107    6747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:34.143110    6747 out.go:358] Setting ErrFile to fd 2...
	I0913 17:24:34.143113    6747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:34.143248    6747 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:24:34.143467    6747 out.go:352] Setting JSON to false
	I0913 17:24:34.143473    6747 mustload.go:65] Loading cluster: no-preload-098000
	I0913 17:24:34.143704    6747 config.go:182] Loaded profile config "no-preload-098000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:24:34.147816    6747 out.go:177] * The control-plane node no-preload-098000 host is not running: state=Stopped
	I0913 17:24:34.151906    6747 out.go:177]   To start a cluster, run: "minikube start -p no-preload-098000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-098000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000: exit status 7 (36.479834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-098000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000: exit status 7 (36.578042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-098000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.12s)
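
The post-mortem helpers above treat exit status 7 from minikube status as an expected result for a stopped host ("may be ok"). The Go sketch below illustrates that convention by shelling out to the same binary and profile shown in the log; it is illustrative, not the helpers' actual implementation.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "no-preload-098000")
	out, err := cmd.Output() // stdout (e.g. "Stopped") is captured even on a non-zero exit
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// Exit status 7 with state "Stopped" is what helpers_test.go reports
		// as "may be ok": the profile exists but the host is not running.
		fmt.Printf("host %s (exit 7, may be ok)\n", strings.TrimSpace(string(out)))
		return
	}
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("host", strings.TrimSpace(string(out)))
}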

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (12.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-865000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-865000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (11.952468792s)

-- stdout --
	* [default-k8s-diff-port-865000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-865000" primary control-plane node in "default-k8s-diff-port-865000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-865000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:24:34.638056    6780 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:24:34.638179    6780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:34.638182    6780 out.go:358] Setting ErrFile to fd 2...
	I0913 17:24:34.638184    6780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:34.638306    6780 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:24:34.639546    6780 out.go:352] Setting JSON to false
	I0913 17:24:34.655569    6780 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5038,"bootTime":1726268436,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:24:34.655639    6780 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:24:34.660834    6780 out.go:177] * [default-k8s-diff-port-865000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:24:34.671886    6780 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:24:34.671934    6780 notify.go:220] Checking for updates...
	I0913 17:24:34.678803    6780 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:24:34.681873    6780 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:24:34.684838    6780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:24:34.687856    6780 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:24:34.689388    6780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:24:34.693244    6780 config.go:182] Loaded profile config "embed-certs-185000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:24:34.693308    6780 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:24:34.693349    6780 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:24:34.697825    6780 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:24:34.702852    6780 start.go:297] selected driver: qemu2
	I0913 17:24:34.702859    6780 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:24:34.702866    6780 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:24:34.705235    6780 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 17:24:34.707897    6780 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:24:34.710887    6780 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:24:34.710905    6780 cni.go:84] Creating CNI manager for ""
	I0913 17:24:34.710926    6780 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:24:34.710931    6780 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 17:24:34.710960    6780 start.go:340] cluster config:
	{Name:default-k8s-diff-port-865000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-865000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:24:34.714816    6780 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:34.718895    6780 out.go:177] * Starting "default-k8s-diff-port-865000" primary control-plane node in "default-k8s-diff-port-865000" cluster
	I0913 17:24:34.722883    6780 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:24:34.722904    6780 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:24:34.722916    6780 cache.go:56] Caching tarball of preloaded images
	I0913 17:24:34.722987    6780 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:24:34.722992    6780 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:24:34.723061    6780 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/default-k8s-diff-port-865000/config.json ...
	I0913 17:24:34.723072    6780 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/default-k8s-diff-port-865000/config.json: {Name:mkb0a65adb0e28ca9dbb6be0e9772e97e14920a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:24:34.723402    6780 start.go:360] acquireMachinesLock for default-k8s-diff-port-865000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:24:36.591740    6780 start.go:364] duration metric: took 1.868369625s to acquireMachinesLock for "default-k8s-diff-port-865000"
	I0913 17:24:36.591874    6780 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-865000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-865000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:24:36.592274    6780 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:24:36.600989    6780 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 17:24:36.649632    6780 start.go:159] libmachine.API.Create for "default-k8s-diff-port-865000" (driver="qemu2")
	I0913 17:24:36.649686    6780 client.go:168] LocalClient.Create starting
	I0913 17:24:36.649821    6780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:24:36.649883    6780 main.go:141] libmachine: Decoding PEM data...
	I0913 17:24:36.649908    6780 main.go:141] libmachine: Parsing certificate...
	I0913 17:24:36.649970    6780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:24:36.650015    6780 main.go:141] libmachine: Decoding PEM data...
	I0913 17:24:36.650027    6780 main.go:141] libmachine: Parsing certificate...
	I0913 17:24:36.650745    6780 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:24:36.829402    6780 main.go:141] libmachine: Creating SSH key...
	I0913 17:24:37.070656    6780 main.go:141] libmachine: Creating Disk image...
	I0913 17:24:37.070665    6780 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:24:37.070908    6780 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/disk.qcow2
	I0913 17:24:37.080696    6780 main.go:141] libmachine: STDOUT: 
	I0913 17:24:37.080725    6780 main.go:141] libmachine: STDERR: 
	I0913 17:24:37.080785    6780 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/disk.qcow2 +20000M
	I0913 17:24:37.088811    6780 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:24:37.088827    6780 main.go:141] libmachine: STDERR: 
	I0913 17:24:37.088840    6780 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/disk.qcow2
	I0913 17:24:37.088845    6780 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:24:37.088872    6780 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:24:37.088895    6780 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:b6:d7:1b:e9:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/disk.qcow2
	I0913 17:24:37.090528    6780 main.go:141] libmachine: STDOUT: 
	I0913 17:24:37.090541    6780 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:24:37.090568    6780 client.go:171] duration metric: took 440.888375ms to LocalClient.Create
	I0913 17:24:39.092702    6780 start.go:128] duration metric: took 2.500474917s to createHost
	I0913 17:24:39.092752    6780 start.go:83] releasing machines lock for "default-k8s-diff-port-865000", held for 2.501056708s
	W0913 17:24:39.092817    6780 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:39.105991    6780 out.go:177] * Deleting "default-k8s-diff-port-865000" in qemu2 ...
	W0913 17:24:39.146192    6780 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:39.146214    6780 start.go:729] Will try again in 5 seconds ...
	I0913 17:24:44.147080    6780 start.go:360] acquireMachinesLock for default-k8s-diff-port-865000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:24:44.147181    6780 start.go:364] duration metric: took 78.959µs to acquireMachinesLock for "default-k8s-diff-port-865000"
	I0913 17:24:44.147208    6780 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-865000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-865000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:24:44.147259    6780 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:24:44.154219    6780 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 17:24:44.170337    6780 start.go:159] libmachine.API.Create for "default-k8s-diff-port-865000" (driver="qemu2")
	I0913 17:24:44.170370    6780 client.go:168] LocalClient.Create starting
	I0913 17:24:44.170427    6780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:24:44.170455    6780 main.go:141] libmachine: Decoding PEM data...
	I0913 17:24:44.170464    6780 main.go:141] libmachine: Parsing certificate...
	I0913 17:24:44.170501    6780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:24:44.170516    6780 main.go:141] libmachine: Decoding PEM data...
	I0913 17:24:44.170521    6780 main.go:141] libmachine: Parsing certificate...
	I0913 17:24:44.170842    6780 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:24:44.381197    6780 main.go:141] libmachine: Creating SSH key...
	I0913 17:24:44.500471    6780 main.go:141] libmachine: Creating Disk image...
	I0913 17:24:44.500481    6780 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:24:44.500671    6780 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/disk.qcow2
	I0913 17:24:44.510017    6780 main.go:141] libmachine: STDOUT: 
	I0913 17:24:44.510031    6780 main.go:141] libmachine: STDERR: 
	I0913 17:24:44.510083    6780 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/disk.qcow2 +20000M
	I0913 17:24:44.517915    6780 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:24:44.517960    6780 main.go:141] libmachine: STDERR: 
	I0913 17:24:44.517976    6780 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/disk.qcow2
	I0913 17:24:44.517983    6780 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:24:44.517989    6780 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:24:44.518020    6780 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:13:a4:8b:54:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/disk.qcow2
	I0913 17:24:44.519754    6780 main.go:141] libmachine: STDOUT: 
	I0913 17:24:44.519768    6780 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:24:44.519780    6780 client.go:171] duration metric: took 349.417541ms to LocalClient.Create
	I0913 17:24:46.522026    6780 start.go:128] duration metric: took 2.374791959s to createHost
	I0913 17:24:46.522131    6780 start.go:83] releasing machines lock for "default-k8s-diff-port-865000", held for 2.375014125s
	W0913 17:24:46.522486    6780 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-865000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-865000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:46.528325    6780 out.go:201] 
	W0913 17:24:46.536274    6780 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:24:46.536316    6780 out.go:270] * 
	* 
	W0913 17:24:46.538756    6780 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:24:46.548221    6780 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-865000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000: exit status 7 (65.069708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-865000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (12.02s)
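
The stdout above also records minikube's recovery path: the first create fails, the half-built profile is deleted, and after a fixed 5-second wait a single retry runs before the start aborts with GUEST_PROVISION. A schematic Go reduction of that flow follows; startHost is a hypothetical stand-in for the driver call, not a real minikube API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the libmachine create/start call that fails above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err == nil {
		fmt.Println("host started")
		return
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	// The log shows the partially created machine being deleted here,
	// followed by "Will try again in 5 seconds ...".
	time.Sleep(5 * time.Second)
	if err := startHost(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		return
	}
	fmt.Println("host started")
}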

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-185000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-185000 create -f testdata/busybox.yaml: exit status 1 (30.448167ms)

** stderr ** 
	error: context "embed-certs-185000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-185000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (31.779ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (34.3345ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
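
DeployApp fails before kubectl ever reaches a cluster: FirstStart aborted above, so minikube never wrote an embed-certs-185000 context into the kubeconfig, and kubectl create exits immediately with "context does not exist". A hedged Go sketch of that precondition check, built on the standard kubectl config get-contexts -o name listing (the hasContext helper is illustrative, not from the suite):

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
)

// hasContext reports whether kubectl knows a context with the given name.
// `kubectl config get-contexts -o name` prints one context name per line.
func hasContext(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if sc.Text() == name {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasContext("embed-certs-185000")
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	// false in this run: the cluster never started, so no context was created.
	fmt.Println("context present:", ok)
}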

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-185000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-185000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-185000 describe deploy/metrics-server -n kube-system: exit status 1 (29.616833ms)

** stderr ** 
	error: context "embed-certs-185000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-185000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (38.14125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-865000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-865000 create -f testdata/busybox.yaml: exit status 1 (28.806042ms)

** stderr ** 
	error: context "default-k8s-diff-port-865000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-865000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000: exit status 7 (30.332125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-865000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000: exit status 7 (29.230791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-865000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-865000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-865000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-865000 describe deploy/metrics-server -n kube-system: exit status 1 (26.95325ms)

** stderr ** 
	error: context "default-k8s-diff-port-865000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-865000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000: exit status 7 (28.936458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-865000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-185000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-185000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.182208334s)

-- stdout --
	* [embed-certs-185000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-185000" primary control-plane node in "embed-certs-185000" cluster
	* Restarting existing qemu2 VM for "embed-certs-185000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-185000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:24:48.030394    6858 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:24:48.030545    6858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:48.030549    6858 out.go:358] Setting ErrFile to fd 2...
	I0913 17:24:48.030551    6858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:48.030694    6858 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:24:48.031762    6858 out.go:352] Setting JSON to false
	I0913 17:24:48.048195    6858 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5052,"bootTime":1726268436,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:24:48.048264    6858 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:24:48.053711    6858 out.go:177] * [embed-certs-185000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:24:48.058445    6858 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:24:48.058517    6858 notify.go:220] Checking for updates...
	I0913 17:24:48.065690    6858 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:24:48.067146    6858 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:24:48.070709    6858 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:24:48.073665    6858 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:24:48.076733    6858 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:24:48.079934    6858 config.go:182] Loaded profile config "embed-certs-185000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:24:48.080219    6858 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:24:48.084716    6858 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 17:24:48.091628    6858 start.go:297] selected driver: qemu2
	I0913 17:24:48.091634    6858 start.go:901] validating driver "qemu2" against &{Name:embed-certs-185000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-185000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:24:48.091684    6858 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:24:48.094055    6858 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:24:48.094079    6858 cni.go:84] Creating CNI manager for ""
	I0913 17:24:48.094102    6858 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:24:48.094123    6858 start.go:340] cluster config:
	{Name:embed-certs-185000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-185000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:24:48.097571    6858 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:48.103669    6858 out.go:177] * Starting "embed-certs-185000" primary control-plane node in "embed-certs-185000" cluster
	I0913 17:24:48.107650    6858 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:24:48.107665    6858 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:24:48.107676    6858 cache.go:56] Caching tarball of preloaded images
	I0913 17:24:48.107738    6858 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:24:48.107744    6858 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:24:48.107801    6858 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/embed-certs-185000/config.json ...
	I0913 17:24:48.108131    6858 start.go:360] acquireMachinesLock for embed-certs-185000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:24:48.108159    6858 start.go:364] duration metric: took 21.958µs to acquireMachinesLock for "embed-certs-185000"
	I0913 17:24:48.108167    6858 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:24:48.108172    6858 fix.go:54] fixHost starting: 
	I0913 17:24:48.108284    6858 fix.go:112] recreateIfNeeded on embed-certs-185000: state=Stopped err=<nil>
	W0913 17:24:48.108294    6858 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:24:48.112548    6858 out.go:177] * Restarting existing qemu2 VM for "embed-certs-185000" ...
	I0913 17:24:48.120631    6858 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:24:48.120669    6858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:bd:04:16:6f:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/disk.qcow2
	I0913 17:24:48.122538    6858 main.go:141] libmachine: STDOUT: 
	I0913 17:24:48.122554    6858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:24:48.122585    6858 fix.go:56] duration metric: took 14.412209ms for fixHost
	I0913 17:24:48.122589    6858 start.go:83] releasing machines lock for "embed-certs-185000", held for 14.426417ms
	W0913 17:24:48.122594    6858 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:24:48.122624    6858 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:48.122628    6858 start.go:729] Will try again in 5 seconds ...
	I0913 17:24:53.124687    6858 start.go:360] acquireMachinesLock for embed-certs-185000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:24:53.125240    6858 start.go:364] duration metric: took 441.292µs to acquireMachinesLock for "embed-certs-185000"
	I0913 17:24:53.125396    6858 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:24:53.125414    6858 fix.go:54] fixHost starting: 
	I0913 17:24:53.126177    6858 fix.go:112] recreateIfNeeded on embed-certs-185000: state=Stopped err=<nil>
	W0913 17:24:53.126202    6858 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:24:53.134732    6858 out.go:177] * Restarting existing qemu2 VM for "embed-certs-185000" ...
	I0913 17:24:53.138771    6858 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:24:53.138989    6858 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:bd:04:16:6f:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/embed-certs-185000/disk.qcow2
	I0913 17:24:53.148550    6858 main.go:141] libmachine: STDOUT: 
	I0913 17:24:53.148647    6858 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:24:53.148746    6858 fix.go:56] duration metric: took 23.318875ms for fixHost
	I0913 17:24:53.148769    6858 start.go:83] releasing machines lock for "embed-certs-185000", held for 23.50325ms
	W0913 17:24:53.149086    6858 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-185000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-185000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:53.156720    6858 out.go:201] 
	W0913 17:24:53.160848    6858 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:24:53.160881    6858 out.go:270] * 
	* 
	W0913 17:24:53.163337    6858 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:24:53.170809    6858 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-185000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (66.34325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.25s)
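
Every failure in this group reduces to the same driver error: connections to /var/run/socket_vmnet are refused, so QEMU never receives its network fd. Before rerunning, the socket can be probed directly on the agent; the sketch below is a minimal check, assuming only the socket_vmnet paths already shown in the log above (socket_vmnet_client takes the socket path followed by a command to run with the connection on fd 3):

	# Is the daemon running, and does the socket file exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# Probe the socket; this fails with "Connection refused" while the daemon is down.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true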

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-865000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-865000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (7.293446459s)

-- stdout --
	* [default-k8s-diff-port-865000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-865000" primary control-plane node in "default-k8s-diff-port-865000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-865000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-865000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:24:49.039704    6873 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:24:49.039821    6873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:49.039824    6873 out.go:358] Setting ErrFile to fd 2...
	I0913 17:24:49.039827    6873 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:49.039962    6873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:24:49.041014    6873 out.go:352] Setting JSON to false
	I0913 17:24:49.057248    6873 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5053,"bootTime":1726268436,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:24:49.057345    6873 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:24:49.062771    6873 out.go:177] * [default-k8s-diff-port-865000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:24:49.068729    6873 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:24:49.068770    6873 notify.go:220] Checking for updates...
	I0913 17:24:49.075725    6873 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:24:49.078755    6873 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:24:49.081720    6873 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:24:49.084717    6873 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:24:49.087679    6873 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:24:49.091013    6873 config.go:182] Loaded profile config "default-k8s-diff-port-865000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:24:49.091261    6873 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:24:49.094668    6873 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 17:24:49.101746    6873 start.go:297] selected driver: qemu2
	I0913 17:24:49.101753    6873 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-865000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-865000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:24:49.101851    6873 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:24:49.104166    6873 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 17:24:49.104189    6873 cni.go:84] Creating CNI manager for ""
	I0913 17:24:49.104212    6873 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:24:49.104242    6873 start.go:340] cluster config:
	{Name:default-k8s-diff-port-865000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-865000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:24:49.107756    6873 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:49.115696    6873 out.go:177] * Starting "default-k8s-diff-port-865000" primary control-plane node in "default-k8s-diff-port-865000" cluster
	I0913 17:24:49.119591    6873 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:24:49.119615    6873 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:24:49.119626    6873 cache.go:56] Caching tarball of preloaded images
	I0913 17:24:49.119672    6873 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:24:49.119677    6873 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:24:49.119737    6873 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/default-k8s-diff-port-865000/config.json ...
	I0913 17:24:49.120209    6873 start.go:360] acquireMachinesLock for default-k8s-diff-port-865000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:24:49.120236    6873 start.go:364] duration metric: took 21.375µs to acquireMachinesLock for "default-k8s-diff-port-865000"
	I0913 17:24:49.120245    6873 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:24:49.120250    6873 fix.go:54] fixHost starting: 
	I0913 17:24:49.120360    6873 fix.go:112] recreateIfNeeded on default-k8s-diff-port-865000: state=Stopped err=<nil>
	W0913 17:24:49.120368    6873 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:24:49.123708    6873 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-865000" ...
	I0913 17:24:49.131669    6873 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:24:49.131707    6873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:13:a4:8b:54:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/disk.qcow2
	I0913 17:24:49.133578    6873 main.go:141] libmachine: STDOUT: 
	I0913 17:24:49.133592    6873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:24:49.133620    6873 fix.go:56] duration metric: took 13.370083ms for fixHost
	I0913 17:24:49.133625    6873 start.go:83] releasing machines lock for "default-k8s-diff-port-865000", held for 13.385458ms
	W0913 17:24:49.133631    6873 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:24:49.133669    6873 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:49.133673    6873 start.go:729] Will try again in 5 seconds ...
	I0913 17:24:54.135584    6873 start.go:360] acquireMachinesLock for default-k8s-diff-port-865000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:24:56.228246    6873 start.go:364] duration metric: took 2.092680417s to acquireMachinesLock for "default-k8s-diff-port-865000"
	I0913 17:24:56.228448    6873 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:24:56.228470    6873 fix.go:54] fixHost starting: 
	I0913 17:24:56.229282    6873 fix.go:112] recreateIfNeeded on default-k8s-diff-port-865000: state=Stopped err=<nil>
	W0913 17:24:56.229311    6873 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:24:56.245831    6873 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-865000" ...
	I0913 17:24:56.254913    6873 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:24:56.255069    6873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:13:a4:8b:54:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/default-k8s-diff-port-865000/disk.qcow2
	I0913 17:24:56.265235    6873 main.go:141] libmachine: STDOUT: 
	I0913 17:24:56.265333    6873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:24:56.265421    6873 fix.go:56] duration metric: took 36.952458ms for fixHost
	I0913 17:24:56.265441    6873 start.go:83] releasing machines lock for "default-k8s-diff-port-865000", held for 37.134083ms
	W0913 17:24:56.265638    6873 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-865000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-865000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:56.273914    6873 out.go:201] 
	W0913 17:24:56.278768    6873 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:24:56.278805    6873 out.go:270] * 
	* 
	W0913 17:24:56.280972    6873 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:24:56.289860    6873 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-865000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000: exit status 7 (60.097542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-865000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.35s)
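
The hint printed above ("minikube delete -p ... may fix it") removes the stale profile but cannot help while the socket itself refuses connections. A plausible recovery sequence, assuming socket_vmnet has first been restarted on the agent, reuses the exact flags from the failing invocation:

	out/minikube-darwin-arm64 delete -p default-k8s-diff-port-865000
	out/minikube-darwin-arm64 start -p default-k8s-diff-port-865000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.31.1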

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-185000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (32.169667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-185000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-185000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-185000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.276084ms)

** stderr ** 
	error: context "embed-certs-185000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-185000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (29.352042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
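
The "context does not exist" errors here are a downstream effect of the failed SecondStart: the cluster never came back up, so there is no kubeconfig context for the profile. This can be confirmed with plain kubectl (nothing minikube-specific assumed):

	# embed-certs-185000 should be absent from the context list after the failed start.
	kubectl config get-contexts
	kubectl --context embed-certs-185000 get pods -n kubernetes-dashboard    # reproduces the same error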

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-185000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (29.558334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
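
With the host stopped, "image list" has no runtime to query, so every expected v1.31.1 image is reported missing; the diff above reflects the dead VM, not a caching problem. Against a running profile the same check can be replayed by hand, for example (grep here stands in for the test's structured diff):

	out/minikube-darwin-arm64 -p embed-certs-185000 image list --format=json | grep kube-apiserver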

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-185000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-185000 --alsologtostderr -v=1: exit status 83 (38.935625ms)

-- stdout --
	* The control-plane node embed-certs-185000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-185000"

-- /stdout --
** stderr ** 
	I0913 17:24:53.439348    6892 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:24:53.439509    6892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:53.439513    6892 out.go:358] Setting ErrFile to fd 2...
	I0913 17:24:53.439515    6892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:53.439640    6892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:24:53.439844    6892 out.go:352] Setting JSON to false
	I0913 17:24:53.439850    6892 mustload.go:65] Loading cluster: embed-certs-185000
	I0913 17:24:53.440051    6892 config.go:182] Loaded profile config "embed-certs-185000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:24:53.441972    6892 out.go:177] * The control-plane node embed-certs-185000 host is not running: state=Stopped
	I0913 17:24:53.445663    6892 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-185000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-185000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (28.941792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (28.92625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
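
Exit status 83 here accompanies minikube's "host is not running" advisory rather than a hard error: pause refuses to act on a Stopped node and prints the start hint instead. A guard of the following shape, using the same --format template as the post-mortem checks above, would make that precondition explicit (sketch only):

	if [ "$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p embed-certs-185000)" = "Running" ]; then
	  out/minikube-darwin-arm64 pause -p embed-certs-185000
	fi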

TestStartStop/group/newest-cni/serial/FirstStart (10.06s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-516000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-516000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.985305208s)

-- stdout --
	* [newest-cni-516000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-516000" primary control-plane node in "newest-cni-516000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-516000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0913 17:24:53.752531    6909 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:24:53.752690    6909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:53.752694    6909 out.go:358] Setting ErrFile to fd 2...
	I0913 17:24:53.752697    6909 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:53.752827    6909 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:24:53.754026    6909 out.go:352] Setting JSON to false
	I0913 17:24:53.769981    6909 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5057,"bootTime":1726268436,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:24:53.770053    6909 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:24:53.774741    6909 out.go:177] * [newest-cni-516000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:24:53.780605    6909 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:24:53.780738    6909 notify.go:220] Checking for updates...
	I0913 17:24:53.786551    6909 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:24:53.789624    6909 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:24:53.792725    6909 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:24:53.794244    6909 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:24:53.797604    6909 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:24:53.800999    6909 config.go:182] Loaded profile config "default-k8s-diff-port-865000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:24:53.801058    6909 config.go:182] Loaded profile config "multinode-984000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:24:53.801102    6909 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:24:53.805480    6909 out.go:177] * Using the qemu2 driver based on user configuration
	I0913 17:24:53.812635    6909 start.go:297] selected driver: qemu2
	I0913 17:24:53.812644    6909 start.go:901] validating driver "qemu2" against <nil>
	I0913 17:24:53.812651    6909 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:24:53.814868    6909 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0913 17:24:53.814909    6909 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0913 17:24:53.823583    6909 out.go:177] * Automatically selected the socket_vmnet network
	I0913 17:24:53.826681    6909 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0913 17:24:53.826697    6909 cni.go:84] Creating CNI manager for ""
	I0913 17:24:53.826720    6909 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:24:53.826729    6909 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 17:24:53.826755    6909 start.go:340] cluster config:
	{Name:newest-cni-516000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:24:53.830776    6909 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:24:53.842619    6909 out.go:177] * Starting "newest-cni-516000" primary control-plane node in "newest-cni-516000" cluster
	I0913 17:24:53.846623    6909 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:24:53.846642    6909 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:24:53.846658    6909 cache.go:56] Caching tarball of preloaded images
	I0913 17:24:53.846746    6909 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:24:53.846752    6909 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:24:53.846814    6909 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/newest-cni-516000/config.json ...
	I0913 17:24:53.846825    6909 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/newest-cni-516000/config.json: {Name:mk39d281aa092376ded134ca6e80e810e2e3efef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 17:24:53.847060    6909 start.go:360] acquireMachinesLock for newest-cni-516000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:24:53.847096    6909 start.go:364] duration metric: took 29.541µs to acquireMachinesLock for "newest-cni-516000"
	I0913 17:24:53.847108    6909 start.go:93] Provisioning new machine with config: &{Name:newest-cni-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:24:53.847141    6909 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:24:53.854684    6909 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 17:24:53.873215    6909 start.go:159] libmachine.API.Create for "newest-cni-516000" (driver="qemu2")
	I0913 17:24:53.873243    6909 client.go:168] LocalClient.Create starting
	I0913 17:24:53.873318    6909 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:24:53.873346    6909 main.go:141] libmachine: Decoding PEM data...
	I0913 17:24:53.873355    6909 main.go:141] libmachine: Parsing certificate...
	I0913 17:24:53.873393    6909 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:24:53.873416    6909 main.go:141] libmachine: Decoding PEM data...
	I0913 17:24:53.873422    6909 main.go:141] libmachine: Parsing certificate...
	I0913 17:24:53.873886    6909 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:24:54.034632    6909 main.go:141] libmachine: Creating SSH key...
	I0913 17:24:54.206306    6909 main.go:141] libmachine: Creating Disk image...
	I0913 17:24:54.206312    6909 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:24:54.206502    6909 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/disk.qcow2
	I0913 17:24:54.215946    6909 main.go:141] libmachine: STDOUT: 
	I0913 17:24:54.215963    6909 main.go:141] libmachine: STDERR: 
	I0913 17:24:54.216021    6909 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/disk.qcow2 +20000M
	I0913 17:24:54.223949    6909 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:24:54.223970    6909 main.go:141] libmachine: STDERR: 
	I0913 17:24:54.223981    6909 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/disk.qcow2
	I0913 17:24:54.223987    6909 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:24:54.223995    6909 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:24:54.224020    6909 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:bd:d1:99:fa:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/disk.qcow2
	I0913 17:24:54.225738    6909 main.go:141] libmachine: STDOUT: 
	I0913 17:24:54.225816    6909 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:24:54.225838    6909 client.go:171] duration metric: took 352.598667ms to LocalClient.Create
	I0913 17:24:56.227995    6909 start.go:128] duration metric: took 2.380897167s to createHost
	I0913 17:24:56.228097    6909 start.go:83] releasing machines lock for "newest-cni-516000", held for 2.381065708s
	W0913 17:24:56.228173    6909 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:56.250943    6909 out.go:177] * Deleting "newest-cni-516000" in qemu2 ...
	W0913 17:24:56.309189    6909 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:24:56.309222    6909 start.go:729] Will try again in 5 seconds ...
	I0913 17:25:01.311330    6909 start.go:360] acquireMachinesLock for newest-cni-516000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:25:01.311857    6909 start.go:364] duration metric: took 409.916µs to acquireMachinesLock for "newest-cni-516000"
	I0913 17:25:01.311986    6909 start.go:93] Provisioning new machine with config: &{Name:newest-cni-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 17:25:01.312291    6909 start.go:125] createHost starting for "" (driver="qemu2")
	I0913 17:25:01.317901    6909 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0913 17:25:01.369942    6909 start.go:159] libmachine.API.Create for "newest-cni-516000" (driver="qemu2")
	I0913 17:25:01.369986    6909 client.go:168] LocalClient.Create starting
	I0913 17:25:01.370119    6909 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/ca.pem
	I0913 17:25:01.370192    6909 main.go:141] libmachine: Decoding PEM data...
	I0913 17:25:01.370208    6909 main.go:141] libmachine: Parsing certificate...
	I0913 17:25:01.370276    6909 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19640-1360/.minikube/certs/cert.pem
	I0913 17:25:01.370320    6909 main.go:141] libmachine: Decoding PEM data...
	I0913 17:25:01.370335    6909 main.go:141] libmachine: Parsing certificate...
	I0913 17:25:01.370980    6909 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso...
	I0913 17:25:01.539938    6909 main.go:141] libmachine: Creating SSH key...
	I0913 17:25:01.638243    6909 main.go:141] libmachine: Creating Disk image...
	I0913 17:25:01.638252    6909 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0913 17:25:01.638421    6909 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/disk.qcow2.raw /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/disk.qcow2
	I0913 17:25:01.647857    6909 main.go:141] libmachine: STDOUT: 
	I0913 17:25:01.647874    6909 main.go:141] libmachine: STDERR: 
	I0913 17:25:01.647948    6909 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/disk.qcow2 +20000M
	I0913 17:25:01.655983    6909 main.go:141] libmachine: STDOUT: Image resized.
	
	I0913 17:25:01.655997    6909 main.go:141] libmachine: STDERR: 
	I0913 17:25:01.656008    6909 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/disk.qcow2
	I0913 17:25:01.656013    6909 main.go:141] libmachine: Starting QEMU VM...
	I0913 17:25:01.656023    6909 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:25:01.656065    6909 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:ef:86:ca:be:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/disk.qcow2
	I0913 17:25:01.657754    6909 main.go:141] libmachine: STDOUT: 
	I0913 17:25:01.657768    6909 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:25:01.657792    6909 client.go:171] duration metric: took 287.80725ms to LocalClient.Create
	I0913 17:25:03.659907    6909 start.go:128] duration metric: took 2.347660542s to createHost
	I0913 17:25:03.659990    6909 start.go:83] releasing machines lock for "newest-cni-516000", held for 2.34818125s
	W0913 17:25:03.660372    6909 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-516000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:25:03.676156    6909 out.go:201] 
	W0913 17:25:03.679297    6909 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:25:03.679368    6909 out.go:270] * 
	W0913 17:25:03.682123    6909 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:25:03.697991    6909 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-516000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-516000 -n newest-cni-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-516000 -n newest-cni-516000: exit status 7 (72.213584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-516000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.06s)
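
Every qemu2 start in this group fails at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so the VM never boots and each profile is left in state=Stopped. A minimal triage sketch for that condition, using the paths shown in the log above (the Homebrew service name is an assumption, not something this report confirms):

    # Is the socket present and is the socket_vmnet daemon alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # Hypothetical remediation if the daemon is down (assumes socket_vmnet
    # was installed and registered as a service via Homebrew):
    sudo brew services start socket_vmnet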

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-865000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000: exit status 7 (31.536958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-865000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-865000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-865000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-865000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.760959ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-865000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-865000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000: exit status 7 (29.723584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-865000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-865000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000: exit status 7 (29.2625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-865000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
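
The VerifyKubernetesImages failure above is a downstream symptom: the test diffs the profile's image listing against the expected v1.31.1 set, and with the host stopped the listing comes back empty, so every image is reported missing. A hedged sketch for inspecting the same listing by hand (the .repoTags field name is an assumption about minikube's JSON output schema):

    out/minikube-darwin-arm64 -p default-k8s-diff-port-865000 image list --format=json \
      | jq -r '.[].repoTags[]?' | sort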

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-865000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-865000 --alsologtostderr -v=1: exit status 83 (48.384334ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-865000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-865000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 17:24:56.552098    6931 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:24:56.552237    6931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:56.552240    6931 out.go:358] Setting ErrFile to fd 2...
	I0913 17:24:56.552242    6931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:24:56.552362    6931 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:24:56.552564    6931 out.go:352] Setting JSON to false
	I0913 17:24:56.552569    6931 mustload.go:65] Loading cluster: default-k8s-diff-port-865000
	I0913 17:24:56.552777    6931 config.go:182] Loaded profile config "default-k8s-diff-port-865000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:24:56.556226    6931 out.go:177] * The control-plane node default-k8s-diff-port-865000 host is not running: state=Stopped
	I0913 17:24:56.567355    6931 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-865000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-865000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000: exit status 7 (29.304875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-865000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000: exit status 7 (29.21775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-865000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-516000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-516000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.180648625s)

                                                
                                                
-- stdout --
	* [newest-cni-516000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-516000" primary control-plane node in "newest-cni-516000" cluster
	* Restarting existing qemu2 VM for "newest-cni-516000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-516000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 17:25:07.226953    6978 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:25:07.227091    6978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:25:07.227095    6978 out.go:358] Setting ErrFile to fd 2...
	I0913 17:25:07.227097    6978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:25:07.227237    6978 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:25:07.228283    6978 out.go:352] Setting JSON to false
	I0913 17:25:07.244540    6978 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5071,"bootTime":1726268436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 17:25:07.244608    6978 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 17:25:07.249585    6978 out.go:177] * [newest-cni-516000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 17:25:07.256525    6978 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 17:25:07.256588    6978 notify.go:220] Checking for updates...
	I0913 17:25:07.263579    6978 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 17:25:07.266566    6978 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 17:25:07.269584    6978 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 17:25:07.272554    6978 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 17:25:07.274016    6978 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 17:25:07.277831    6978 config.go:182] Loaded profile config "newest-cni-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:25:07.278122    6978 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 17:25:07.282510    6978 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 17:25:07.287538    6978 start.go:297] selected driver: qemu2
	I0913 17:25:07.287544    6978 start.go:901] validating driver "qemu2" against &{Name:newest-cni-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:25:07.287588    6978 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 17:25:07.289978    6978 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0913 17:25:07.290002    6978 cni.go:84] Creating CNI manager for ""
	I0913 17:25:07.290027    6978 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 17:25:07.290053    6978 start.go:340] cluster config:
	{Name:newest-cni-516000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 17:25:07.293546    6978 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 17:25:07.301547    6978 out.go:177] * Starting "newest-cni-516000" primary control-plane node in "newest-cni-516000" cluster
	I0913 17:25:07.305569    6978 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 17:25:07.305585    6978 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 17:25:07.305603    6978 cache.go:56] Caching tarball of preloaded images
	I0913 17:25:07.305665    6978 preload.go:172] Found /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 17:25:07.305670    6978 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 17:25:07.305732    6978 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/newest-cni-516000/config.json ...
	I0913 17:25:07.306255    6978 start.go:360] acquireMachinesLock for newest-cni-516000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:25:07.306282    6978 start.go:364] duration metric: took 20.916µs to acquireMachinesLock for "newest-cni-516000"
	I0913 17:25:07.306290    6978 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:25:07.306295    6978 fix.go:54] fixHost starting: 
	I0913 17:25:07.306412    6978 fix.go:112] recreateIfNeeded on newest-cni-516000: state=Stopped err=<nil>
	W0913 17:25:07.306420    6978 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:25:07.309654    6978 out.go:177] * Restarting existing qemu2 VM for "newest-cni-516000" ...
	I0913 17:25:07.317583    6978 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:25:07.317625    6978 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:ef:86:ca:be:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/disk.qcow2
	I0913 17:25:07.319565    6978 main.go:141] libmachine: STDOUT: 
	I0913 17:25:07.319582    6978 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:25:07.319612    6978 fix.go:56] duration metric: took 13.315667ms for fixHost
	I0913 17:25:07.319616    6978 start.go:83] releasing machines lock for "newest-cni-516000", held for 13.32975ms
	W0913 17:25:07.319620    6978 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:25:07.319656    6978 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:25:07.319661    6978 start.go:729] Will try again in 5 seconds ...
	I0913 17:25:12.321100    6978 start.go:360] acquireMachinesLock for newest-cni-516000: {Name:mkf2e4b1ec539dd640dccbe8ce1fbd93eb6c1f08 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 17:25:12.321476    6978 start.go:364] duration metric: took 289.292µs to acquireMachinesLock for "newest-cni-516000"
	I0913 17:25:12.321599    6978 start.go:96] Skipping create...Using existing machine configuration
	I0913 17:25:12.321618    6978 fix.go:54] fixHost starting: 
	I0913 17:25:12.322302    6978 fix.go:112] recreateIfNeeded on newest-cni-516000: state=Stopped err=<nil>
	W0913 17:25:12.322326    6978 fix.go:138] unexpected machine state, will restart: <nil>
	I0913 17:25:12.331875    6978 out.go:177] * Restarting existing qemu2 VM for "newest-cni-516000" ...
	I0913 17:25:12.333454    6978 qemu.go:418] Using hvf for hardware acceleration
	I0913 17:25:12.333689    6978 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:ef:86:ca:be:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/disk.qcow2
	I0913 17:25:12.342574    6978 main.go:141] libmachine: STDOUT: 
	I0913 17:25:12.342672    6978 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0913 17:25:12.342800    6978 fix.go:56] duration metric: took 21.18225ms for fixHost
	I0913 17:25:12.342819    6978 start.go:83] releasing machines lock for "newest-cni-516000", held for 21.321792ms
	W0913 17:25:12.343021    6978 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-516000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0913 17:25:12.351853    6978 out.go:201] 
	W0913 17:25:12.355842    6978 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0913 17:25:12.355865    6978 out.go:270] * 
	W0913 17:25:12.358447    6978 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 17:25:12.366864    6978 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-516000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-516000 -n newest-cni-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-516000 -n newest-cni-516000: exit status 7 (72.041375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-516000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)
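
SecondStart exercises the restart path: fixHost sees state=Stopped, reissues the same qemu-system-aarch64 command, hits the same socket error, waits 5 seconds, and retries once before exiting 80. The pidfile named in that command line can confirm nothing was left running; a sketch using the path from the log (qemu with -daemonize writes this file only after a successful start):

    PIDFILE=/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/newest-cni-516000/qemu.pid
    test -f "$PIDFILE" && ps -p "$(cat "$PIDFILE")" || echo "no qemu process for this profile"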

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-516000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-516000 -n newest-cni-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-516000 -n newest-cni-516000: exit status 7 (30.342208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-516000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-516000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-516000 --alsologtostderr -v=1: exit status 83 (43.068209ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-516000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-516000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0913 17:25:12.555623    6992 out.go:345] Setting OutFile to fd 1 ...
	I0913 17:25:12.555784    6992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:25:12.555787    6992 out.go:358] Setting ErrFile to fd 2...
	I0913 17:25:12.555790    6992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 17:25:12.555919    6992 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 17:25:12.556135    6992 out.go:352] Setting JSON to false
	I0913 17:25:12.556141    6992 mustload.go:65] Loading cluster: newest-cni-516000
	I0913 17:25:12.556391    6992 config.go:182] Loaded profile config "newest-cni-516000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 17:25:12.560277    6992 out.go:177] * The control-plane node newest-cni-516000 host is not running: state=Stopped
	I0913 17:25:12.564189    6992 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-516000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-516000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-516000 -n newest-cni-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-516000 -n newest-cni-516000: exit status 7 (30.005334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-516000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-516000 -n newest-cni-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-516000 -n newest-cni-516000: exit status 7 (30.460166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-516000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
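
The post-mortem helpers above poll a single field of minikube's Go-template status output; the same mechanism can report the other component states in one call (field names as documented for `minikube status --format`):

    out/minikube-darwin-arm64 status -p newest-cni-516000 \
      --format='host: {{.Host}} kubelet: {{.Kubelet}} apiserver: {{.APIServer}} kubeconfig: {{.Kubeconfig}}'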

                                                
                                    

Test pass (154/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.1/json-events 9.17
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 198.87
29 TestAddons/serial/Volcano 37.42
31 TestAddons/serial/GCPAuth/Namespaces 0.09
34 TestAddons/parallel/Ingress 18.53
35 TestAddons/parallel/InspektorGadget 10.29
36 TestAddons/parallel/MetricsServer 5.34
39 TestAddons/parallel/CSI 34.06
40 TestAddons/parallel/Headlamp 16.62
41 TestAddons/parallel/CloudSpanner 5.2
42 TestAddons/parallel/LocalPath 40.89
43 TestAddons/parallel/NvidiaDevicePlugin 6.19
44 TestAddons/parallel/Yakd 10.23
45 TestAddons/StoppedEnableDisable 9.39
53 TestHyperKitDriverInstallOrUpdate 10.35
56 TestErrorSpam/setup 35.42
57 TestErrorSpam/start 0.34
58 TestErrorSpam/status 0.25
59 TestErrorSpam/pause 0.66
60 TestErrorSpam/unpause 0.64
61 TestErrorSpam/stop 55.33
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 48.82
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 35.05
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.04
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.72
73 TestFunctional/serial/CacheCmd/cache/add_local 1.67
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
77 TestFunctional/serial/CacheCmd/cache/cache_reload 0.64
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 2.2
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.01
81 TestFunctional/serial/ExtraConfig 33.32
82 TestFunctional/serial/ComponentHealth 0.04
83 TestFunctional/serial/LogsCmd 0.64
84 TestFunctional/serial/LogsFileCmd 0.61
85 TestFunctional/serial/InvalidService 3.71
87 TestFunctional/parallel/ConfigCmd 0.23
88 TestFunctional/parallel/DashboardCmd 8.44
89 TestFunctional/parallel/DryRun 0.23
90 TestFunctional/parallel/InternationalLanguage 0.11
91 TestFunctional/parallel/StatusCmd 0.24
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 24.05
99 TestFunctional/parallel/SSHCmd 0.12
100 TestFunctional/parallel/CpCmd 0.39
102 TestFunctional/parallel/FileSync 0.06
103 TestFunctional/parallel/CertSync 0.38
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.12
111 TestFunctional/parallel/License 0.27
112 TestFunctional/parallel/Version/short 0.09
113 TestFunctional/parallel/Version/components 0.21
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.06
118 TestFunctional/parallel/ImageCommands/ImageBuild 1.88
119 TestFunctional/parallel/ImageCommands/Setup 1.71
120 TestFunctional/parallel/DockerEnv/bash 0.26
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
124 TestFunctional/parallel/ServiceCmd/DeployApp 12.09
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.44
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.15
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.17
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.28
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.12
137 TestFunctional/parallel/ServiceCmd/List 0.12
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
140 TestFunctional/parallel/ServiceCmd/Format 0.09
141 TestFunctional/parallel/ServiceCmd/URL 0.09
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
149 TestFunctional/parallel/ProfileCmd/profile_list 0.11
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
151 TestFunctional/parallel/MountCmd/any-port 5.23
152 TestFunctional/parallel/MountCmd/specific-port 1.65
153 TestFunctional/parallel/MountCmd/VerifyCleanup 0.77
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 178.22
161 TestMultiControlPlane/serial/DeployApp 5.15
162 TestMultiControlPlane/serial/PingHostFromPods 0.74
163 TestMultiControlPlane/serial/AddWorkerNode 54.88
164 TestMultiControlPlane/serial/NodeLabels 0.14
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.23
166 TestMultiControlPlane/serial/CopyFile 4.21
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 79.45
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 1.95
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.03
259 TestStoppedBinaryUpgrade/Setup 0.99
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
276 TestNoKubernetes/serial/ProfileList 31.48
277 TestNoKubernetes/serial/Stop 3.31
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.71
294 TestStartStop/group/old-k8s-version/serial/Stop 3.67
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
305 TestStartStop/group/no-preload/serial/Stop 3.31
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
318 TestStartStop/group/embed-certs/serial/Stop 3.51
321 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.06
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
338 TestStartStop/group/newest-cni/serial/Stop 3.23
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-882000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-882000: exit status 85 (92.814125ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-882000 | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT |          |
	|         | -p download-only-882000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 16:25:32
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 16:25:32.583864    1884 out.go:345] Setting OutFile to fd 1 ...
	I0913 16:25:32.584007    1884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:25:32.584010    1884 out.go:358] Setting ErrFile to fd 2...
	I0913 16:25:32.584013    1884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:25:32.584161    1884 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	W0913 16:25:32.584254    1884 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19640-1360/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19640-1360/.minikube/config/config.json: no such file or directory
	I0913 16:25:32.585532    1884 out.go:352] Setting JSON to true
	I0913 16:25:32.602884    1884 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1496,"bootTime":1726268436,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 16:25:32.602956    1884 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 16:25:32.607382    1884 out.go:97] [download-only-882000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 16:25:32.607511    1884 notify.go:220] Checking for updates...
	W0913 16:25:32.607530    1884 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 16:25:32.610200    1884 out.go:169] MINIKUBE_LOCATION=19640
	I0913 16:25:32.613303    1884 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 16:25:32.617371    1884 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 16:25:32.620305    1884 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 16:25:32.623313    1884 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	W0913 16:25:32.627806    1884 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 16:25:32.628018    1884 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 16:25:32.633229    1884 out.go:97] Using the qemu2 driver based on user configuration
	I0913 16:25:32.633247    1884 start.go:297] selected driver: qemu2
	I0913 16:25:32.633260    1884 start.go:901] validating driver "qemu2" against <nil>
	I0913 16:25:32.633326    1884 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 16:25:32.636308    1884 out.go:169] Automatically selected the socket_vmnet network
	I0913 16:25:32.641961    1884 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0913 16:25:32.642046    1884 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 16:25:32.642094    1884 cni.go:84] Creating CNI manager for ""
	I0913 16:25:32.642128    1884 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0913 16:25:32.642175    1884 start.go:340] cluster config:
	{Name:download-only-882000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-882000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 16:25:32.647251    1884 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 16:25:32.650360    1884 out.go:97] Downloading VM boot image ...
	I0913 16:25:32.650377    1884 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/iso/arm64/minikube-v1.34.0-1726243933-19640-arm64.iso
	I0913 16:25:40.753408    1884 out.go:97] Starting "download-only-882000" primary control-plane node in "download-only-882000" cluster
	I0913 16:25:40.753429    1884 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 16:25:40.818284    1884 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0913 16:25:40.818300    1884 cache.go:56] Caching tarball of preloaded images
	I0913 16:25:40.818499    1884 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 16:25:40.822674    1884 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0913 16:25:40.822681    1884 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0913 16:25:40.900486    1884 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0913 16:25:48.206215    1884 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0913 16:25:48.206392    1884 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0913 16:25:48.903997    1884 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0913 16:25:48.904198    1884 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/download-only-882000/config.json ...
	I0913 16:25:48.904215    1884 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/download-only-882000/config.json: {Name:mk58a2c4a4c645f58b2f0c31f52a004fa38a922f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 16:25:48.904447    1884 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 16:25:48.904638    1884 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0913 16:25:49.384408    1884 out.go:193] 
	W0913 16:25:49.389405    1884 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19640-1360/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106939780 0x106939780 0x106939780 0x106939780 0x106939780 0x106939780 0x106939780] Decompressors:map[bz2:0x140003bd540 gz:0x140003bd548 tar:0x140003bd4f0 tar.bz2:0x140003bd500 tar.gz:0x140003bd510 tar.xz:0x140003bd520 tar.zst:0x140003bd530 tbz2:0x140003bd500 tgz:0x140003bd510 txz:0x140003bd520 tzst:0x140003bd530 xz:0x140003bd550 zip:0x140003bd560 zst:0x140003bd558] Getters:map[file:0x14001462550 http:0x14000828190 https:0x140008281e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0913 16:25:49.389431    1884 out_reason.go:110] 
	W0913 16:25:49.396189    1884 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0913 16:25:49.400467    1884 out.go:193] 
	
	
	* The control-plane node download-only-882000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-882000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
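[Editor's note] The kubectl failure captured above is a checksum-sidecar failure rather than a failure of the binary download itself: the URL handed to the downloader carries go-getter's "?checksum=file:<url>" query, so the .sha256 sidecar is fetched first, and a 404 on that sidecar aborts the whole transfer with "invalid checksum: Error downloading checksum file" (v1.20.0 likely predates published darwin/arm64 kubectl builds, which would explain the 404). A minimal Go sketch of the same convention, assuming the hashicorp/go-getter library that produced the "getter: &{...}" dump above; the URLs are copied from the log and the destination path is hypothetical:

package main

import (
	"context"
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// "?checksum=file:<url>" tells go-getter to download the sidecar and
	// verify the payload against it; a 404 on the sidecar fails the Get.
	src := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl" +
		"?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"

	client := &getter.Client{
		Ctx:  context.Background(),
		Src:  src,
		Dst:  "/tmp/kubectl.download", // hypothetical destination
		Mode: getter.ClientModeFile,
	}
	if err := client.Get(); err != nil {
		log.Fatalf("download failed: %v", err) // reproduces the 404 checksum error
	}
}

Against a release that does publish the sidecar (such as v1.31.1, exercised below), the same call succeeds.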

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-882000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.1/json-events (9.17s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-302000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-302000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (9.169871917s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (9.17s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-302000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-302000: exit status 85 (79.473916ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-882000 | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT |                     |
	|         | -p download-only-882000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT | 13 Sep 24 16:25 PDT |
	| delete  | -p download-only-882000        | download-only-882000 | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT | 13 Sep 24 16:25 PDT |
	| start   | -o=json --download-only        | download-only-302000 | jenkins | v1.34.0 | 13 Sep 24 16:25 PDT |                     |
	|         | -p download-only-302000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 16:25:49
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 16:25:49.815806    1911 out.go:345] Setting OutFile to fd 1 ...
	I0913 16:25:49.815927    1911 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:25:49.815930    1911 out.go:358] Setting ErrFile to fd 2...
	I0913 16:25:49.815933    1911 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:25:49.816064    1911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 16:25:49.817162    1911 out.go:352] Setting JSON to true
	I0913 16:25:49.833467    1911 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1513,"bootTime":1726268436,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 16:25:49.833532    1911 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 16:25:49.837609    1911 out.go:97] [download-only-302000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 16:25:49.837730    1911 notify.go:220] Checking for updates...
	I0913 16:25:49.840270    1911 out.go:169] MINIKUBE_LOCATION=19640
	I0913 16:25:49.843390    1911 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 16:25:49.847410    1911 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 16:25:49.850272    1911 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 16:25:49.853401    1911 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	W0913 16:25:49.857838    1911 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 16:25:49.857982    1911 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 16:25:49.861337    1911 out.go:97] Using the qemu2 driver based on user configuration
	I0913 16:25:49.861345    1911 start.go:297] selected driver: qemu2
	I0913 16:25:49.861349    1911 start.go:901] validating driver "qemu2" against <nil>
	I0913 16:25:49.861393    1911 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 16:25:49.864414    1911 out.go:169] Automatically selected the socket_vmnet network
	I0913 16:25:49.869724    1911 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0913 16:25:49.869827    1911 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 16:25:49.869851    1911 cni.go:84] Creating CNI manager for ""
	I0913 16:25:49.869877    1911 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 16:25:49.869884    1911 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 16:25:49.869928    1911 start.go:340] cluster config:
	{Name:download-only-302000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-302000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 16:25:49.873496    1911 iso.go:125] acquiring lock: {Name:mk3c320e1155803eb8e6b109254f4d2555e620c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 16:25:49.876310    1911 out.go:97] Starting "download-only-302000" primary control-plane node in "download-only-302000" cluster
	I0913 16:25:49.876317    1911 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 16:25:49.927244    1911 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 16:25:49.927264    1911 cache.go:56] Caching tarball of preloaded images
	I0913 16:25:49.927407    1911 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 16:25:49.931401    1911 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0913 16:25:49.931407    1911 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0913 16:25:50.014562    1911 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19640-1360/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-302000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-302000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)
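[Editor's note] The two preload URLs fetched in this run follow a visible naming scheme: preloaded-images-k8s-<schema>-<k8sVersion>-<runtime>-<storageDriver>-<arch>.tar.lz4, stored under <schema>/<k8sVersion>/ in the minikube-preloaded-volume-tarballs bucket and verified by an md5 query parameter. A sketch inferred from the two URLs in this log (not minikube's actual helper; the function name is invented):

package main

import "fmt"

// preloadURL rebuilds the tarball URL from its visible components.
func preloadURL(schema, k8sVersion, runtime, storage, arch string) string {
	name := fmt.Sprintf("preloaded-images-k8s-%s-%s-%s-%s-%s.tar.lz4",
		schema, k8sVersion, runtime, storage, arch)
	return fmt.Sprintf(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/%s/%s/%s",
		schema, k8sVersion, name)
}

func main() {
	// Prints the exact v1.31.1 URL that appears in the log above.
	fmt.Println(preloadURL("v18", "v1.31.1", "docker", "overlay2", "arm64"))
}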

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-302000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-979000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-979000: exit status 85 (59.946458ms)

-- stdout --
	* Profile "addons-979000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-979000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-979000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-979000: exit status 85 (55.997042ms)

-- stdout --
	* Profile "addons-979000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-979000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (198.87s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-979000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-979000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m18.87434625s)
--- PASS: TestAddons/Setup (198.87s)

TestAddons/serial/Volcano (37.42s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 6.435375ms
addons_test.go:905: volcano-admission stabilized in 6.472958ms
addons_test.go:913: volcano-controller stabilized in 6.485625ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-bzmdb" [f936f0cb-3a44-4277-a585-baf709c17973] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.005224667s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-rqsps" [dcc14c62-eb6b-4da6-b50f-76dfe97562a9] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004692208s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-8zcjg" [e74fab08-0b7f-4ee8-93cf-3785d45fa061] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004910459s
addons_test.go:932: (dbg) Run:  kubectl --context addons-979000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-979000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-979000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [f9e33f9c-7068-43c6-9f10-71b0ffbdc0a5] Pending
helpers_test.go:344: "test-job-nginx-0" [f9e33f9c-7068-43c6-9f10-71b0ffbdc0a5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [f9e33f9c-7068-43c6-9f10-71b0ffbdc0a5] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004057458s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-979000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-979000 addons disable volcano --alsologtostderr -v=1: (10.18865075s)
--- PASS: TestAddons/serial/Volcano (37.42s)
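[Editor's note] The "waiting 6m0s for pods matching ..." lines above come from a shared helper (helpers_test.go:344) that polls until pods behind a label selector report Running. A minimal client-go equivalent of that wait loop, shown only as an illustration of the pattern (namespace and selector taken from the Volcano test above, not the helper's actual code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("volcano-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "app=volcano-scheduler"})
		if err == nil && len(pods.Items) > 0 &&
			pods.Items[0].Status.Phase == corev1.PodRunning {
			fmt.Println("healthy:", pods.Items[0].Name)
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for app=volcano-scheduler")
}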

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-979000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-979000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/parallel/Ingress (18.53s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-979000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-979000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-979000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [da04e5cb-fe76-4e80-9b2b-768559a1d100] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [da04e5cb-fe76-4e80-9b2b-768559a1d100] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.005869125s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-979000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-979000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-979000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-979000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-979000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-979000 addons disable ingress --alsologtostderr -v=1: (7.374217458s)
--- PASS: TestAddons/parallel/Ingress (18.53s)
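[Editor's note] The "nslookup hello-john.test 192.168.105.2" step above queries the ingress-dns addon directly at the node IP that "minikube ip" just printed. The same check expressed with Go's standard resolver, pointing every query at that server (illustrative only; the IP is the one reported by this run):

package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{}
			// Route all lookups to the ingress-dns server on the node.
			return d.DialContext(ctx, network, "192.168.105.2:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "hello-john.test")
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs) // expect the node IP configured by ingress-dns
}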

TestAddons/parallel/InspektorGadget (10.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-qjjcz" [81da4cd2-e996-45f3-85f5-d84b3f3bf929] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0128955s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-979000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-979000: (5.279023709s)
--- PASS: TestAddons/parallel/InspektorGadget (10.29s)

TestAddons/parallel/MetricsServer (5.34s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.2735ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-rwttn" [ce61eae0-27a9-429c-a37b-50d96bcabb99] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010127666s
addons_test.go:417: (dbg) Run:  kubectl --context addons-979000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-979000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.34s)

TestAddons/parallel/CSI (34.06s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 2.326375ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-979000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-979000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [037393b4-bff6-4b5f-b657-0a8e1c28a792] Pending
helpers_test.go:344: "task-pv-pod" [037393b4-bff6-4b5f-b657-0a8e1c28a792] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [037393b4-bff6-4b5f-b657-0a8e1c28a792] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.009092375s
addons_test.go:590: (dbg) Run:  kubectl --context addons-979000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-979000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-979000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-979000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-979000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-979000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-979000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4f85ff86-5774-4fe8-a6dd-b086da8952be] Pending
helpers_test.go:344: "task-pv-pod-restore" [4f85ff86-5774-4fe8-a6dd-b086da8952be] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4f85ff86-5774-4fe8-a6dd-b086da8952be] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.012067541s
addons_test.go:632: (dbg) Run:  kubectl --context addons-979000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-979000 delete pod task-pv-pod-restore: (1.061723167s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-979000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-979000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-979000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-979000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.13127925s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-979000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (34.06s)
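[Editor's note] The long run of identical "get pvc hpvc-restore -o jsonpath={.status.phase}" commands above is a poll loop (helpers_test.go:394) waiting for the restored claim to bind. A small os/exec sketch of the same idea, under the assumption that shelling out to kubectl is acceptable, as the test itself does; profile, claim, and namespace names are taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pvcPhase shells out to kubectl and returns the claim's status.phase.
func pvcPhase(kubeContext, name, namespace string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "pvc", name, "-o", "jsonpath={.status.phase}",
		"-n", namespace).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		phase, err := pvcPhase("addons-979000", "hpvc-restore", "default")
		if err == nil && phase == "Bound" {
			fmt.Println("pvc bound")
			return
		}
		time.Sleep(5 * time.Second)
	}
	panic("timed out waiting for hpvc-restore to bind")
}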

TestAddons/parallel/Headlamp (16.62s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-979000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-5wqxr" [5c550546-3b9a-4663-8b02-b14229663869] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-5wqxr" [5c550546-3b9a-4663-8b02-b14229663869] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.0104915s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-979000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-979000 addons disable headlamp --alsologtostderr -v=1: (5.244755416s)
--- PASS: TestAddons/parallel/Headlamp (16.62s)

TestAddons/parallel/CloudSpanner (5.2s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-b74dm" [fc18cdc6-859c-4add-99cb-11df259d2ccc] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006918292s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-979000
--- PASS: TestAddons/parallel/CloudSpanner (5.20s)

TestAddons/parallel/LocalPath (40.89s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-979000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-979000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-979000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [fb47e54f-7fcc-4105-b5d5-e2e4b75f3016] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [fb47e54f-7fcc-4105-b5d5-e2e4b75f3016] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [fb47e54f-7fcc-4105-b5d5-e2e4b75f3016] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003997167s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-979000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-979000 ssh "cat /opt/local-path-provisioner/pvc-48e892b2-e457-47ef-ab8f-c1a5597bc326_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-979000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-979000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-979000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-arm64 -p addons-979000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.410695334s)
--- PASS: TestAddons/parallel/LocalPath (40.89s)

TestAddons/parallel/NvidiaDevicePlugin (6.19s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-g7fjk" [f7e430eb-3175-4af5-b895-1abcccee28a5] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.011008959s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-979000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.19s)

TestAddons/parallel/Yakd (10.23s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-2tbjk" [0236a265-b8b6-4631-991b-c7c71d4667f2] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00583625s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-979000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-979000 addons disable yakd --alsologtostderr -v=1: (5.224525708s)
--- PASS: TestAddons/parallel/Yakd (10.23s)

TestAddons/StoppedEnableDisable (9.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-979000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-979000: (9.205574583s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-979000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-979000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-979000
--- PASS: TestAddons/StoppedEnableDisable (9.39s)

TestHyperKitDriverInstallOrUpdate (10.35s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.35s)

TestErrorSpam/setup (35.42s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-220000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-220000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 --driver=qemu2 : (35.418954542s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (35.42s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.66s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 pause
--- PASS: TestErrorSpam/pause (0.66s)

TestErrorSpam/unpause (0.64s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 unpause
--- PASS: TestErrorSpam/unpause (0.64s)

TestErrorSpam/stop (55.33s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 stop: (3.196332708s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 stop: (26.062060625s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-220000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-220000 stop: (26.069488041s)
--- PASS: TestErrorSpam/stop (55.33s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19640-1360/.minikube/files/etc/test/nested/copy/1882/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48.82s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-830000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-830000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (48.816619791s)
--- PASS: TestFunctional/serial/StartWithProxy (48.82s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.05s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-830000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-830000 --alsologtostderr -v=8: (35.053883458s)
functional_test.go:663: soft start took 35.054366541s for "functional-830000" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.05s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-830000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.72s)

TestFunctional/serial/CacheCmd/cache/add_local (1.67s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-830000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2666498097/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 cache add minikube-local-cache-test:functional-830000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-arm64 -p functional-830000 cache add minikube-local-cache-test:functional-830000: (1.353818709s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 cache delete minikube-local-cache-test:functional-830000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-830000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.67s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-830000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (67.165791ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.64s)
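
The non-zero exit above is the expected midpoint of this test: it deletes the image inside the node, confirms crictl inspecti fails, then restores it from the host cache. A sketch of the same sequence, reusing the commands from this run:

    out/minikube-darwin-arm64 -p functional-830000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-arm64 -p functional-830000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image absent
    out/minikube-darwin-arm64 -p functional-830000 cache reload
    out/minikube-darwin-arm64 -p functional-830000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: image restored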

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (2.2s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 kubectl -- --context functional-830000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-arm64 -p functional-830000 kubectl -- --context functional-830000 get pods: (2.197366625s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.20s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-830000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-830000 get pods: (1.013759375s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.01s)

TestFunctional/serial/ExtraConfig (33.32s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-830000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-830000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.318680833s)
functional_test.go:761: restart took 33.318797208s for "functional-830000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.32s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-830000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.64s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

TestFunctional/serial/LogsFileCmd (0.61s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd3187143963/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.61s)

TestFunctional/serial/InvalidService (3.71s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-830000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-830000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-830000: exit status 115 (139.807208ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30967 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-830000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.71s)
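
Exit status 115 (SVC_UNREACHABLE) is the passing outcome here: the Service object exists but no running pod backs it. A sketch for confirming that by hand (the endpoints check is this note's addition, not part of the test):

    kubectl --context functional-830000 get endpoints invalid-svc          # no ready addresses
    out/minikube-darwin-arm64 service invalid-svc -p functional-830000     # exits 115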

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-830000 config get cpus: exit status 14 (31.060916ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-830000 config get cpus: exit status 14 (33.12075ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
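
The two exit-14 results above are the point of the test: config get fails with exit status 14 once the key has been unset. A sketch of the cycle:

    out/minikube-darwin-arm64 -p functional-830000 config set cpus 2
    out/minikube-darwin-arm64 -p functional-830000 config get cpus     # prints 2
    out/minikube-darwin-arm64 -p functional-830000 config unset cpus
    out/minikube-darwin-arm64 -p functional-830000 config get cpus     # exit 14: key not found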

TestFunctional/parallel/DashboardCmd (8.44s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-830000 --alsologtostderr -v=1]
E0913 16:44:28.930449    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-830000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3180: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.44s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-830000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-830000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (115.773458ms)

-- stdout --
	* [functional-830000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0913 16:44:27.296387    3161 out.go:345] Setting OutFile to fd 1 ...
	I0913 16:44:27.296539    3161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:44:27.296543    3161 out.go:358] Setting ErrFile to fd 2...
	I0913 16:44:27.296546    3161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:44:27.296668    3161 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 16:44:27.297784    3161 out.go:352] Setting JSON to false
	I0913 16:44:27.315394    3161 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2631,"bootTime":1726268436,"procs":484,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 16:44:27.315478    3161 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 16:44:27.321136    3161 out.go:177] * [functional-830000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I0913 16:44:27.330101    3161 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 16:44:27.330168    3161 notify.go:220] Checking for updates...
	I0913 16:44:27.336101    3161 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 16:44:27.339109    3161 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 16:44:27.340310    3161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 16:44:27.343066    3161 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 16:44:27.346164    3161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 16:44:27.347761    3161 config.go:182] Loaded profile config "functional-830000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 16:44:27.348030    3161 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 16:44:27.352142    3161 out.go:177] * Using the qemu2 driver based on existing profile
	I0913 16:44:27.358925    3161 start.go:297] selected driver: qemu2
	I0913 16:44:27.358933    3161 start.go:901] validating driver "qemu2" against &{Name:functional-830000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-830000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 16:44:27.358999    3161 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 16:44:27.366096    3161 out.go:201] 
	W0913 16:44:27.370152    3161 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0913 16:44:27.374103    3161 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-830000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)
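
Exit status 23 marks failed validation under --dry-run: 250MB is below the 1800MB usable minimum, so minikube aborts before touching the VM, while a dry run without the flag succeeds. A sketch of both outcomes:

    out/minikube-darwin-arm64 start -p functional-830000 --dry-run --memory 250MB --driver=qemu2   # exit 23, RSRC_INSUFFICIENT_REQ_MEMORY
    out/minikube-darwin-arm64 start -p functional-830000 --dry-run --driver=qemu2                  # exit 0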

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-830000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-830000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (111.639542ms)

-- stdout --
	* [functional-830000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0913 16:44:27.519394    3172 out.go:345] Setting OutFile to fd 1 ...
	I0913 16:44:27.519526    3172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:44:27.519530    3172 out.go:358] Setting ErrFile to fd 2...
	I0913 16:44:27.519533    3172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 16:44:27.519664    3172 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
	I0913 16:44:27.521028    3172 out.go:352] Setting JSON to false
	I0913 16:44:27.538851    3172 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2631,"bootTime":1726268436,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0913 16:44:27.538931    3172 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0913 16:44:27.544175    3172 out.go:177] * [functional-830000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I0913 16:44:27.551122    3172 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 16:44:27.551167    3172 notify.go:220] Checking for updates...
	I0913 16:44:27.558111    3172 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	I0913 16:44:27.561135    3172 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0913 16:44:27.564121    3172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 16:44:27.567074    3172 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	I0913 16:44:27.570110    3172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 16:44:27.573310    3172 config.go:182] Loaded profile config "functional-830000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 16:44:27.573554    3172 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 16:44:27.578056    3172 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0913 16:44:27.585064    3172 start.go:297] selected driver: qemu2
	I0913 16:44:27.585069    3172 start.go:901] validating driver "qemu2" against &{Name:functional-830000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19640/minikube-v1.34.0-1726243933-19640-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-830000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 16:44:27.585119    3172 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 16:44:27.591142    3172 out.go:201] 
	W0913 16:44:27.594002    3172 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0913 16:44:27.598122    3172 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)
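
The -f flag takes a Go template over the status fields exercised above (Host, Kubelet, APIServer, Kubeconfig), and -o json emits the same data as JSON. A sketch:

    out/minikube-darwin-arm64 -p functional-830000 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
    out/minikube-darwin-arm64 -p functional-830000 status -o json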

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (24.05s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6b7ca975-4175-4c04-8e7e-c9d3c533d47a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007846709s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-830000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-830000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-830000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-830000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6159732c-3149-44ca-9d9e-56b415935412] Pending
helpers_test.go:344: "sp-pod" [6159732c-3149-44ca-9d9e-56b415935412] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6159732c-3149-44ca-9d9e-56b415935412] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.011618458s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-830000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-830000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-830000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ed597def-0b4b-48bc-9bba-27b77075b2de] Pending
helpers_test.go:344: "sp-pod" [ed597def-0b4b-48bc-9bba-27b77075b2de] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ed597def-0b4b-48bc-9bba-27b77075b2de] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007764916s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-830000 exec sp-pod -- ls /tmp/mount
E0913 16:44:18.995278    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.05s)
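
The test binds a PVC, writes a file through one pod, and verifies the file survives into a replacement pod. A sketch of the claim check (the jsonpath query is this note's addition, not the test's):

    kubectl --context functional-830000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-830000 get pvc myclaim -o jsonpath='{.status.phase}'   # Bound once provisioned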

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.39s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh -n functional-830000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 cp functional-830000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3784955360/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh -n functional-830000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh -n functional-830000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.39s)
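
minikube cp copies in both directions: a host path to a node path, and profile:path back to the host. A sketch using the paths from this run (the host destination is illustrative):

    out/minikube-darwin-arm64 -p functional-830000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-darwin-arm64 -p functional-830000 cp functional-830000:/home/docker/cp-test.txt ./cp-test.txt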

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1882/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "sudo cat /etc/test/nested/copy/1882/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)
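
FileSync relies on minikube copying anything placed under $MINIKUBE_HOME/files into the node at the same path during start. A sketch assuming that layout (the content echoed matches the test's expectation):

    mkdir -p ~/.minikube/files/etc/test/nested/copy/1882
    echo 'Test file for checking file sync process' > ~/.minikube/files/etc/test/nested/copy/1882/hosts
    out/minikube-darwin-arm64 start -p functional-830000
    out/minikube-darwin-arm64 -p functional-830000 ssh "sudo cat /etc/test/nested/copy/1882/hosts"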

TestFunctional/parallel/CertSync (0.38s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1882.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "sudo cat /etc/ssl/certs/1882.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1882.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "sudo cat /usr/share/ca-certificates/1882.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/18822.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "sudo cat /etc/ssl/certs/18822.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/18822.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "sudo cat /usr/share/ca-certificates/18822.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.38s)
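
The numeric name checked above (51391683.0) follows the OpenSSL subject-hash convention for trust-store entries. A sketch, assuming the certificate was staged under $MINIKUBE_HOME/certs as in this run's setup:

    openssl x509 -in ~/.minikube/certs/1882.pem -noout -hash                            # prints the hash that names the .0 file
    out/minikube-darwin-arm64 -p functional-830000 ssh "sudo cat /etc/ssl/certs/51391683.0"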

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-830000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-830000 ssh "sudo systemctl is-active crio": exit status 1 (123.018416ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)
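
The exit status 3 surfaced through ssh is systemctl's code for an inactive unit, so a non-zero exit is the passing case for the non-active runtime. A sketch of the contrast (docker is the active runtime in this run):

    out/minikube-darwin-arm64 -p functional-830000 ssh "sudo systemctl is-active crio"     # inactive, exit 3
    out/minikube-darwin-arm64 -p functional-830000 ssh "sudo systemctl is-active docker"   # active, exit 0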

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.21s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.21s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-830000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-830000
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-830000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-830000 image ls --format short --alsologtostderr:
I0913 16:44:33.928413    3204 out.go:345] Setting OutFile to fd 1 ...
I0913 16:44:33.928575    3204 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 16:44:33.928578    3204 out.go:358] Setting ErrFile to fd 2...
I0913 16:44:33.928581    3204 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 16:44:33.928721    3204 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
I0913 16:44:33.929140    3204 config.go:182] Loaded profile config "functional-830000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 16:44:33.929211    3204 config.go:182] Loaded profile config "functional-830000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 16:44:33.930067    3204 ssh_runner.go:195] Run: systemctl --version
I0913 16:44:33.930077    3204 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/functional-830000/id_rsa Username:docker}
I0913 16:44:33.953468    3204 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-830000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| localhost/my-image                          | functional-830000 | e36b04e6077e6 | 1.41MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-830000 | 9d4e15b943acb | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-830000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-830000 image ls --format table --alsologtostderr:
I0913 16:44:36.006837    3220 out.go:345] Setting OutFile to fd 1 ...
I0913 16:44:36.006991    3220 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 16:44:36.006995    3220 out.go:358] Setting ErrFile to fd 2...
I0913 16:44:36.006997    3220 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 16:44:36.007133    3220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
I0913 16:44:36.007543    3220 config.go:182] Loaded profile config "functional-830000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 16:44:36.007609    3220 config.go:182] Loaded profile config "functional-830000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 16:44:36.008446    3220 ssh_runner.go:195] Run: systemctl --version
I0913 16:44:36.008453    3220 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/functional-830000/id_rsa Username:docker}
I0913 16:44:36.029731    3220 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-830000 image ls --format json --alsologtostderr:
[{"id":"9d4e15b943acb0c5048ae13945d464fa053aa1776d9f6d8dcddd45a1686d30de","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-830000"],"size":"30"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-830000"],"size":"4780000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"e36b04e6077e62cda381467c105ec8c44ea9dd79a4748b1b6b3761901e2f5e3f","repoDigests":[],"repoTags":["localhost/my-image:functional-830000"],"size":"1410000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-830000 image ls --format json --alsologtostderr:
I0913 16:44:35.939461    3218 out.go:345] Setting OutFile to fd 1 ...
I0913 16:44:35.939630    3218 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 16:44:35.939633    3218 out.go:358] Setting ErrFile to fd 2...
I0913 16:44:35.939636    3218 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 16:44:35.939760    3218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
I0913 16:44:35.940203    3218 config.go:182] Loaded profile config "functional-830000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 16:44:35.940269    3218 config.go:182] Loaded profile config "functional-830000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 16:44:35.941123    3218 ssh_runner.go:195] Run: systemctl --version
I0913 16:44:35.941132    3218 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/functional-830000/id_rsa Username:docker}
I0913 16:44:35.962855    3218 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-830000 image ls --format yaml --alsologtostderr:
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 9d4e15b943acb0c5048ae13945d464fa053aa1776d9f6d8dcddd45a1686d30de
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-830000
size: "30"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-830000
size: "4780000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-830000 image ls --format yaml --alsologtostderr:
I0913 16:44:33.996236    3206 out.go:345] Setting OutFile to fd 1 ...
I0913 16:44:33.996396    3206 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 16:44:33.996400    3206 out.go:358] Setting ErrFile to fd 2...
I0913 16:44:33.996402    3206 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 16:44:33.996516    3206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
I0913 16:44:33.996898    3206 config.go:182] Loaded profile config "functional-830000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 16:44:33.996961    3206 config.go:182] Loaded profile config "functional-830000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 16:44:33.997755    3206 ssh_runner.go:195] Run: systemctl --version
I0913 16:44:33.997763    3206 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/functional-830000/id_rsa Username:docker}
I0913 16:44:34.018560    3206 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.06s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.88s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-830000 ssh pgrep buildkitd: exit status 1 (54.670041ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image build -t localhost/my-image:functional-830000 testdata/build --alsologtostderr
2024/09/13 16:44:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-830000 image build -t localhost/my-image:functional-830000 testdata/build --alsologtostderr: (1.755012417s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-830000 image build -t localhost/my-image:functional-830000 testdata/build --alsologtostderr:
I0913 16:44:34.115059    3212 out.go:345] Setting OutFile to fd 1 ...
I0913 16:44:34.115310    3212 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 16:44:34.115316    3212 out.go:358] Setting ErrFile to fd 2...
I0913 16:44:34.115318    3212 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 16:44:34.115446    3212 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19640-1360/.minikube/bin
I0913 16:44:34.115906    3212 config.go:182] Loaded profile config "functional-830000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 16:44:34.116646    3212 config.go:182] Loaded profile config "functional-830000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 16:44:34.117558    3212 ssh_runner.go:195] Run: systemctl --version
I0913 16:44:34.117571    3212 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19640-1360/.minikube/machines/functional-830000/id_rsa Username:docker}
I0913 16:44:34.140750    3212 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2391701394.tar
I0913 16:44:34.140830    3212 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0913 16:44:34.144327    3212 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2391701394.tar
I0913 16:44:34.145915    3212 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2391701394.tar: stat -c "%s %y" /var/lib/minikube/build/build.2391701394.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2391701394.tar': No such file or directory
I0913 16:44:34.145937    3212 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2391701394.tar --> /var/lib/minikube/build/build.2391701394.tar (3072 bytes)
I0913 16:44:34.154652    3212 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2391701394
I0913 16:44:34.158233    3212 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2391701394 -xf /var/lib/minikube/build/build.2391701394.tar
I0913 16:44:34.161953    3212 docker.go:360] Building image: /var/lib/minikube/build/build.2391701394
I0913 16:44:34.162018    3212 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-830000 /var/lib/minikube/build/build.2391701394
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s
#6 [2/3] RUN true
#6 DONE 0.1s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:e36b04e6077e62cda381467c105ec8c44ea9dd79a4748b1b6b3761901e2f5e3f done
#8 naming to localhost/my-image:functional-830000 done
#8 DONE 0.0s
I0913 16:44:35.739774    3212 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-830000 /var/lib/minikube/build/build.2391701394: (1.577749917s)
I0913 16:44:35.739866    3212 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2391701394
I0913 16:44:35.743528    3212 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2391701394.tar
I0913 16:44:35.746814    3212 build_images.go:217] Built localhost/my-image:functional-830000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2391701394.tar
I0913 16:44:35.746831    3212 build_images.go:133] succeeded building to: functional-830000
I0913 16:44:35.746836    3212 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.88s)
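
Note: the BuildKit trace above fully determines the shape of the build context (a 97-byte Dockerfile with three instructions, plus a small content.txt). A sketch that would reproduce an equivalent build by hand, assuming hypothetical file contents since testdata/build itself is not included in this log:

    # Recreate an equivalent build context (content.txt text is illustrative)
    mkdir -p testdata/build
    printf 'created by test\n' > testdata/build/content.txt
    cat > testdata/build/Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /
    EOF
    # Build inside the minikube VM's Docker daemon, as the test does
    out/minikube-darwin-arm64 -p functional-830000 image build -t localhost/my-image:functional-830000 testdata/build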

TestFunctional/parallel/ImageCommands/Setup (1.71s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.694094791s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-830000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.71s)

TestFunctional/parallel/DockerEnv/bash (0.26s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-830000 docker-env) && out/minikube-darwin-arm64 status -p functional-830000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-830000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.26s)
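
Note: eval'ing docker-env points the host docker CLI at the VM's Docker daemon by exporting a few variables. The actual output is not captured above; a sketch with illustrative values inferred from the cluster IP and MINIKUBE_HOME seen elsewhere in this log:

    # Approximate output of: out/minikube-darwin-arm64 -p functional-830000 docker-env (values illustrative)
    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://192.168.105.4:2376"
    export DOCKER_CERT_PATH="/Users/jenkins/minikube-integration/19640-1360/.minikube/certs"
    export MINIKUBE_ACTIVE_DOCKERD="functional-830000"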

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-830000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-830000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-b6q6c" [2c3dedab-f384-46e1-84be-9ecde2a92fb0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-b6q6c" [2c3dedab-f384-46e1-84be-9ecde2a92fb0] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.006704208s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image load --daemon kicbase/echo-server:functional-830000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.44s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image load --daemon kicbase/echo-server:functional-830000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-830000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image load --daemon kicbase/echo-server:functional-830000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image save kicbase/echo-server:functional-830000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image rm kicbase/echo-server:functional-830000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-830000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 image save --daemon kicbase/echo-server:functional-830000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-830000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.28s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-830000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-830000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-830000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3008: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-830000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.28s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-830000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-830000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c25d47ff-f6e4-4535-98f8-d44d3e1d43a9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c25d47ff-f6e4-4535-98f8-d44d3e1d43a9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.010475s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.12s)
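
Note: testdata/testsvc.yaml is not reproduced in this log. Judging from the pod name, the run=nginx-svc label it waits on, and the LoadBalancer ingress IP the tunnel assigns below, an equivalent manifest would look roughly like this (hypothetical sketch, not the test's actual source):

    # Hypothetical reconstruction of testdata/testsvc.yaml
    kubectl --context functional-830000 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-svc
      labels:
        run: nginx-svc
    spec:
      containers:
      - name: nginx
        image: nginx
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
    spec:
      type: LoadBalancer
      selector:
        run: nginx-svc
      ports:
      - port: 80
    EOF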

TestFunctional/parallel/ServiceCmd/List (0.12s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.12s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 service list -o json
functional_test.go:1494: Took "78.994459ms" to run "out/minikube-darwin-arm64 -p functional-830000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:32006
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

TestFunctional/parallel/ServiceCmd/URL (0.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:32006
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)
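
Note: the HTTPS and URL tests above resolve to the same NodePort on the VM, so a plain request from the host should answer directly, e.g. (illustrative):

    # Illustrative; the endpoint was discovered by the test above
    curl http://192.168.105.4:32006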

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-830000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.31.252 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-830000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0913 16:44:19.317210    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "79.382667ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "34.564208ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "82.224792ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "34.56375ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

TestFunctional/parallel/MountCmd/any-port (5.23s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-830000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2883236247/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726271059614073000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2883236247/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726271059614073000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2883236247/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726271059614073000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2883236247/001/test-1726271059614073000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-830000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (57.01725ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
E0913 16:44:19.960781    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 13 23:44 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 13 23:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 13 23:44 test-1726271059614073000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh cat /mount-9p/test-1726271059614073000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-830000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [bf94b7b9-d453-4c9e-b6b2-f2087655e962] Pending
E0913 16:44:21.243384    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [bf94b7b9-d453-4c9e-b6b2-f2087655e962] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [bf94b7b9-d453-4c9e-b6b2-f2087655e962] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0913 16:44:23.806957    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [bf94b7b9-d453-4c9e-b6b2-f2087655e962] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.010521s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-830000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-830000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2883236247/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.23s)
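
Note: testdata/busybox-mount-test.yaml is likewise not included in the log. From the pod lifecycle above (a mount-munger container that leaves created-by-pod behind, removes created-by-test-removed-by-pod, and ends in Succeeded), a roughly equivalent pod spec would be (hypothetical sketch):

    # Hypothetical reconstruction of testdata/busybox-mount-test.yaml
    kubectl --context functional-830000 replace --force -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-mount
      labels:
        integration-test: busybox-mount
    spec:
      restartPolicy: Never
      containers:
      - name: mount-munger
        image: gcr.io/k8s-minikube/busybox
        command: ["/bin/sh", "-c",
                  "cat /mount-9p/created-by-test && rm /mount-9p/created-by-test-removed-by-pod && touch /mount-9p/created-by-pod"]
        volumeMounts:
        - name: test-volume
          mountPath: /mount-9p
      volumes:
      - name: test-volume
        hostPath:
          path: /mount-9p
    EOF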

TestFunctional/parallel/MountCmd/specific-port (1.65s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-830000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2185640952/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-830000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (54.834416ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-830000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (96.95375ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-830000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2185640952/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-830000 ssh "sudo umount -f /mount-9p": exit status 1 (60.455042ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-830000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-830000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2185640952/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.65s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.77s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-830000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2781637999/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-830000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2781637999/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-830000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2781637999/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-830000 ssh "findmnt -T" /mount1: exit status 1 (72.245959ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-830000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-830000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-830000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2781637999/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-830000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2781637999/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-830000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2781637999/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.77s)

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-830000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-830000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-830000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (178.22s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-475000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0913 16:44:39.174369    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:44:59.658293    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:45:40.621464    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
E0913 16:47:02.544009    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/addons-979000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-475000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (2m58.032055708s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (178.22s)

TestMultiControlPlane/serial/DeployApp (5.15s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-475000 -- rollout status deployment/busybox: (3.724477958s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- exec busybox-7dff88458-bn8l9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- exec busybox-7dff88458-drkgr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- exec busybox-7dff88458-fr77s -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- exec busybox-7dff88458-bn8l9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- exec busybox-7dff88458-drkgr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- exec busybox-7dff88458-fr77s -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- exec busybox-7dff88458-bn8l9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- exec busybox-7dff88458-drkgr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- exec busybox-7dff88458-fr77s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.15s)

TestMultiControlPlane/serial/PingHostFromPods (0.74s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- exec busybox-7dff88458-bn8l9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- exec busybox-7dff88458-bn8l9 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- exec busybox-7dff88458-drkgr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- exec busybox-7dff88458-drkgr -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- exec busybox-7dff88458-fr77s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-475000 -- exec busybox-7dff88458-fr77s -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.74s)

TestMultiControlPlane/serial/AddWorkerNode (54.88s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-475000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-475000 -v=7 --alsologtostderr: (54.662394417s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.88s)

TestMultiControlPlane/serial/NodeLabels (0.14s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-475000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.23s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.23s)

TestMultiControlPlane/serial/CopyFile (4.21s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp testdata/cp-test.txt ha-475000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp ha-475000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile750898521/001/cp-test_ha-475000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp ha-475000:/home/docker/cp-test.txt ha-475000-m02:/home/docker/cp-test_ha-475000_ha-475000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m02 "sudo cat /home/docker/cp-test_ha-475000_ha-475000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp ha-475000:/home/docker/cp-test.txt ha-475000-m03:/home/docker/cp-test_ha-475000_ha-475000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m03 "sudo cat /home/docker/cp-test_ha-475000_ha-475000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp ha-475000:/home/docker/cp-test.txt ha-475000-m04:/home/docker/cp-test_ha-475000_ha-475000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m04 "sudo cat /home/docker/cp-test_ha-475000_ha-475000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp testdata/cp-test.txt ha-475000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp ha-475000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile750898521/001/cp-test_ha-475000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp ha-475000-m02:/home/docker/cp-test.txt ha-475000:/home/docker/cp-test_ha-475000-m02_ha-475000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000 "sudo cat /home/docker/cp-test_ha-475000-m02_ha-475000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp ha-475000-m02:/home/docker/cp-test.txt ha-475000-m03:/home/docker/cp-test_ha-475000-m02_ha-475000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m03 "sudo cat /home/docker/cp-test_ha-475000-m02_ha-475000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp ha-475000-m02:/home/docker/cp-test.txt ha-475000-m04:/home/docker/cp-test_ha-475000-m02_ha-475000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m04 "sudo cat /home/docker/cp-test_ha-475000-m02_ha-475000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp testdata/cp-test.txt ha-475000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp ha-475000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile750898521/001/cp-test_ha-475000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp ha-475000-m03:/home/docker/cp-test.txt ha-475000:/home/docker/cp-test_ha-475000-m03_ha-475000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000 "sudo cat /home/docker/cp-test_ha-475000-m03_ha-475000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp ha-475000-m03:/home/docker/cp-test.txt ha-475000-m02:/home/docker/cp-test_ha-475000-m03_ha-475000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m02 "sudo cat /home/docker/cp-test_ha-475000-m03_ha-475000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp ha-475000-m03:/home/docker/cp-test.txt ha-475000-m04:/home/docker/cp-test_ha-475000-m03_ha-475000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m04 "sudo cat /home/docker/cp-test_ha-475000-m03_ha-475000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp testdata/cp-test.txt ha-475000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp ha-475000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile750898521/001/cp-test_ha-475000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp ha-475000-m04:/home/docker/cp-test.txt ha-475000:/home/docker/cp-test_ha-475000-m04_ha-475000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000 "sudo cat /home/docker/cp-test_ha-475000-m04_ha-475000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp ha-475000-m04:/home/docker/cp-test.txt ha-475000-m02:/home/docker/cp-test_ha-475000-m04_ha-475000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m02 "sudo cat /home/docker/cp-test_ha-475000-m04_ha-475000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 cp ha-475000-m04:/home/docker/cp-test.txt ha-475000-m03:/home/docker/cp-test_ha-475000-m04_ha-475000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-475000 ssh -n ha-475000-m03 "sudo cat /home/docker/cp-test_ha-475000-m04_ha-475000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.21s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (79.45s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0913 16:58:42.472328    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m19.448791584s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (79.45s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.95s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-014000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-014000 --output=json --user=testUser: (1.945292625s)
--- PASS: TestJSONOutput/stop/Command (1.95s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-898000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-898000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (98.781333ms)

-- stdout --
	{"specversion":"1.0","id":"225adaef-bfc3-41b2-b432-d51c7ef654a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-898000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c2c222b-2a2f-40ab-945b-d9a7a1e4b9e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19640"}}
	{"specversion":"1.0","id":"e9859363-7b4c-408e-965a-b1fb58a263ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig"}}
	{"specversion":"1.0","id":"7fd29212-38f7-409a-99ce-8cb40c689b48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"f4058b0d-42e1-4aa4-af1c-5e4aa9828a89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3b754954-f586-44b5-849b-7d25f27e867b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube"}}
	{"specversion":"1.0","id":"87b4ad1a-4d2e-48b0-a6b7-f20d3514c0d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e08f1460-82af-4e73-8b6e-2f979007fe6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-898000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-898000
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (0.99s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.99s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-004000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-004000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (106.659084ms)

-- stdout --
	* [NoKubernetes-004000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19640-1360/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19640-1360/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-004000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-004000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.243917ms)

-- stdout --
	* The control-plane node NoKubernetes-004000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-004000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.48s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.681922667s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
E0913 17:21:45.531693    1882 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19640-1360/.minikube/profiles/functional-830000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.794163125s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.48s)

TestNoKubernetes/serial/Stop (3.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-004000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-004000: (3.307057792s)
--- PASS: TestNoKubernetes/serial/Stop (3.31s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-004000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-004000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.668375ms)

-- stdout --
	* The control-plane node NoKubernetes-004000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-004000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-434000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.71s)

TestStartStop/group/old-k8s-version/serial/Stop (3.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-601000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-601000 --alsologtostderr -v=3: (3.668831334s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.67s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-601000 -n old-k8s-version-601000: exit status 7 (49.464208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-601000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/no-preload/serial/Stop (3.31s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-098000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-098000 --alsologtostderr -v=3: (3.314009334s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.31s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-098000 -n no-preload-098000: exit status 7 (54.845584ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-098000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.51s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-185000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-185000 --alsologtostderr -v=3: (3.509774084s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.51s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-865000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-865000 --alsologtostderr -v=3: (2.057830625s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-185000 -n embed-certs-185000: exit status 7 (57.291125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-185000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-865000 -n default-k8s-diff-port-865000: exit status 7 (52.394458ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-865000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-516000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.23s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-516000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-516000 --alsologtostderr -v=3: (3.229296458s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.23s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-516000 -n newest-cni-516000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-516000 -n newest-cni-516000: exit status 7 (54.063667ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-516000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (21/274)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.45s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-234000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-234000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-234000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-234000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-234000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-234000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-234000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-234000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-234000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-234000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-234000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: /etc/hosts:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: /etc/resolv.conf:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-234000

>>> host: crictl pods:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: crictl containers:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> k8s: describe netcat deployment:
error: context "cilium-234000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-234000" does not exist

>>> k8s: netcat logs:
error: context "cilium-234000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-234000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-234000" does not exist

>>> k8s: coredns logs:
error: context "cilium-234000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-234000" does not exist

>>> k8s: api server logs:
error: context "cilium-234000" does not exist

>>> host: /etc/cni:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: ip a s:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: ip r s:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: iptables-save:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: iptables table nat:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-234000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-234000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-234000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-234000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-234000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-234000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-234000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-234000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-234000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-234000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-234000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: kubelet daemon config:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> k8s: kubelet logs:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-234000

>>> host: docker daemon status:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: docker daemon config:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: docker system info:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: cri-docker daemon status:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: cri-docker daemon config:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: cri-dockerd version:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: containerd daemon status:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: containerd daemon config:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: containerd config dump:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: crio daemon status:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: crio daemon config:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: /etc/crio:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

>>> host: crio config:
* Profile "cilium-234000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-234000"

----------------------- debugLogs end: cilium-234000 [took: 2.348790125s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-234000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-234000
--- SKIP: TestNetworkPlugins/group/cilium (2.45s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-754000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-754000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
